00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2081 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3346 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.128 The recommended git tool is: git 00:00:00.128 using credential 00000000-0000-0000-0000-000000000002 00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.157 Fetching changes from the remote Git repository 00:00:00.159 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.187 Using shallow fetch with depth 1 00:00:00.187 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.187 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.434 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.443 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.453 Checking out Revision 3aaeb01851f3410c69bd29d15f29de9bbe186390 (FETCH_HEAD) 00:00:07.453 > git config core.sparsecheckout # timeout=10 00:00:07.463 > git read-tree -mu HEAD # timeout=10 00:00:07.477 > git checkout -f 3aaeb01851f3410c69bd29d15f29de9bbe186390 # timeout=5 00:00:07.494 Commit message: "jenkins/autotest: use known issue detector function from shm lib" 00:00:07.494 > git rev-list --no-walk 3aaeb01851f3410c69bd29d15f29de9bbe186390 # timeout=10 00:00:07.618 [Pipeline] Start of Pipeline 00:00:07.634 [Pipeline] library 00:00:07.635 Loading library shm_lib@master 00:00:07.636 Library shm_lib@master is cached. Copying from home. 00:00:07.651 [Pipeline] node 00:00:07.659 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.661 [Pipeline] { 00:00:07.671 [Pipeline] catchError 00:00:07.673 [Pipeline] { 00:00:07.684 [Pipeline] wrap 00:00:07.693 [Pipeline] { 00:00:07.701 [Pipeline] stage 00:00:07.703 [Pipeline] { (Prologue) 00:00:07.722 [Pipeline] echo 00:00:07.724 Node: VM-host-SM9 00:00:07.730 [Pipeline] cleanWs 00:00:07.741 [WS-CLEANUP] Deleting project workspace... 00:00:07.741 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.747 [WS-CLEANUP] done 00:00:07.928 [Pipeline] setCustomBuildProperty 00:00:08.026 [Pipeline] httpRequest 00:00:08.056 [Pipeline] echo 00:00:08.057 Sorcerer 10.211.164.101 is alive 00:00:08.063 [Pipeline] retry 00:00:08.065 [Pipeline] { 00:00:08.075 [Pipeline] httpRequest 00:00:08.079 HttpMethod: GET 00:00:08.079 URL: http://10.211.164.101/packages/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:08.080 Sending request to url: http://10.211.164.101/packages/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:08.098 Response Code: HTTP/1.1 200 OK 00:00:08.099 Success: Status code 200 is in the accepted range: 200,404 00:00:08.099 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:26.915 [Pipeline] } 00:00:26.932 [Pipeline] // retry 00:00:26.939 [Pipeline] sh 00:00:27.221 + tar --no-same-owner -xf jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:27.237 [Pipeline] httpRequest 00:00:27.260 [Pipeline] echo 00:00:27.262 Sorcerer 10.211.164.101 is alive 00:00:27.271 [Pipeline] retry 00:00:27.274 [Pipeline] { 00:00:27.288 [Pipeline] httpRequest 00:00:27.292 HttpMethod: GET 00:00:27.293 URL: http://10.211.164.101/packages/spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:00:27.293 Sending request to url: http://10.211.164.101/packages/spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:00:27.307 Response Code: HTTP/1.1 200 OK 00:00:27.308 Success: Status code 200 is in the accepted range: 200,404 00:00:27.308 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:00:45.227 [Pipeline] } 00:00:45.244 [Pipeline] // retry 00:00:45.252 [Pipeline] sh 00:00:45.532 + tar --no-same-owner -xf spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:00:48.078 [Pipeline] sh 00:00:48.358 + git -C spdk log --oneline -n5 00:00:48.358 227b8322c module/sock: free addr info before return 00:00:48.358 29119cdfb nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:00:48.358 c7d225385 nvmf/tcp: replace pending_buf_queue with nvmf_tcp_request_get_buffers 00:00:48.358 18ede8d38 nvmf: enable iobuf based queuing for nvmf requests 00:00:48.358 a48eba161 nvmf: change order of functions in the transport.c file 00:00:48.376 [Pipeline] withCredentials 00:00:48.385 > git --version # timeout=10 00:00:48.397 > git --version # 'git version 2.39.2' 00:00:48.412 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:48.414 [Pipeline] { 00:00:48.424 [Pipeline] retry 00:00:48.426 [Pipeline] { 00:00:48.442 [Pipeline] sh 00:00:48.722 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:48.732 [Pipeline] } 00:00:48.749 [Pipeline] // retry 00:00:48.753 [Pipeline] } 00:00:48.769 [Pipeline] // withCredentials 00:00:48.777 [Pipeline] httpRequest 00:00:48.794 [Pipeline] echo 00:00:48.796 Sorcerer 10.211.164.101 is alive 00:00:48.805 [Pipeline] retry 00:00:48.807 [Pipeline] { 00:00:48.819 [Pipeline] httpRequest 00:00:48.824 HttpMethod: GET 00:00:48.824 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:48.824 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:48.829 Response Code: HTTP/1.1 200 OK 00:00:48.830 Success: Status code 200 is in the accepted range: 200,404 00:00:48.830 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:02.532 [Pipeline] } 
00:01:02.549 [Pipeline] // retry 00:01:02.558 [Pipeline] sh 00:01:02.838 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:04.756 [Pipeline] sh 00:01:05.038 + git -C dpdk log --oneline -n5 00:01:05.038 caf0f5d395 version: 22.11.4 00:01:05.038 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:05.038 dc9c799c7d vhost: fix missing spinlock unlock 00:01:05.038 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:05.038 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:05.057 [Pipeline] writeFile 00:01:05.071 [Pipeline] sh 00:01:05.352 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:05.364 [Pipeline] sh 00:01:05.703 + cat autorun-spdk.conf 00:01:05.703 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.703 SPDK_TEST_NVME=1 00:01:05.703 SPDK_TEST_FTL=1 00:01:05.703 SPDK_TEST_ISAL=1 00:01:05.703 SPDK_RUN_ASAN=1 00:01:05.703 SPDK_RUN_UBSAN=1 00:01:05.703 SPDK_TEST_XNVME=1 00:01:05.703 SPDK_TEST_NVME_FDP=1 00:01:05.703 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:05.703 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:05.703 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:05.710 RUN_NIGHTLY=1 00:01:05.711 [Pipeline] } 00:01:05.723 [Pipeline] // stage 00:01:05.736 [Pipeline] stage 00:01:05.738 [Pipeline] { (Run VM) 00:01:05.750 [Pipeline] sh 00:01:06.029 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:06.029 + echo 'Start stage prepare_nvme.sh' 00:01:06.029 Start stage prepare_nvme.sh 00:01:06.029 + [[ -n 3 ]] 00:01:06.029 + disk_prefix=ex3 00:01:06.029 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:06.029 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:06.029 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:06.029 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.029 ++ SPDK_TEST_NVME=1 00:01:06.029 ++ SPDK_TEST_FTL=1 00:01:06.029 ++ SPDK_TEST_ISAL=1 00:01:06.029 ++ SPDK_RUN_ASAN=1 00:01:06.029 ++ SPDK_RUN_UBSAN=1 00:01:06.029 ++ SPDK_TEST_XNVME=1 00:01:06.029 ++ SPDK_TEST_NVME_FDP=1 00:01:06.029 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:06.029 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:06.029 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.029 ++ RUN_NIGHTLY=1 00:01:06.029 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:06.029 + nvme_files=() 00:01:06.029 + declare -A nvme_files 00:01:06.029 + backend_dir=/var/lib/libvirt/images/backends 00:01:06.029 + nvme_files['nvme.img']=5G 00:01:06.029 + nvme_files['nvme-cmb.img']=5G 00:01:06.029 + nvme_files['nvme-multi0.img']=4G 00:01:06.029 + nvme_files['nvme-multi1.img']=4G 00:01:06.029 + nvme_files['nvme-multi2.img']=4G 00:01:06.029 + nvme_files['nvme-openstack.img']=8G 00:01:06.029 + nvme_files['nvme-zns.img']=5G 00:01:06.029 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:06.029 + (( SPDK_TEST_FTL == 1 )) 00:01:06.029 + nvme_files["nvme-ftl.img"]=6G 00:01:06.029 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:06.029 + nvme_files["nvme-fdp.img"]=1G 00:01:06.029 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:06.029 + for nvme in "${!nvme_files[@]}" 00:01:06.029 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:06.029 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:06.029 + for nvme in "${!nvme_files[@]}" 00:01:06.029 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:01:06.029 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:06.288 + for nvme in "${!nvme_files[@]}" 00:01:06.288 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:06.288 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.288 + for nvme in "${!nvme_files[@]}" 00:01:06.288 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:06.288 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:06.288 + for nvme in "${!nvme_files[@]}" 00:01:06.288 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:06.288 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.288 + for nvme in "${!nvme_files[@]}" 00:01:06.288 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:06.547 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:06.547 + for nvme in "${!nvme_files[@]}" 00:01:06.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:06.547 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:06.547 + for nvme in "${!nvme_files[@]}" 00:01:06.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:01:06.547 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:06.547 + for nvme in "${!nvme_files[@]}" 00:01:06.547 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:06.805 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.805 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:06.805 + echo 'End stage prepare_nvme.sh' 00:01:06.805 End stage prepare_nvme.sh 00:01:06.819 [Pipeline] sh 00:01:07.103 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:07.103 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:07.103 00:01:07.103 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:07.103 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:07.103 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:07.103 HELP=0 00:01:07.103 DRY_RUN=0 00:01:07.103 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:01:07.103 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:07.103 NVME_AUTO_CREATE=0 00:01:07.103 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:01:07.103 NVME_CMB=,,,, 00:01:07.103 NVME_PMR=,,,, 00:01:07.103 NVME_ZNS=,,,, 00:01:07.103 NVME_MS=true,,,, 00:01:07.103 NVME_FDP=,,,on, 00:01:07.103 SPDK_VAGRANT_DISTRO=fedora39 00:01:07.103 SPDK_VAGRANT_VMCPU=10 00:01:07.103 SPDK_VAGRANT_VMRAM=12288 00:01:07.103 SPDK_VAGRANT_PROVIDER=libvirt 00:01:07.103 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:07.103 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:07.103 SPDK_OPENSTACK_NETWORK=0 00:01:07.103 VAGRANT_PACKAGE_BOX=0 00:01:07.103 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:07.103 FORCE_DISTRO=true 00:01:07.103 VAGRANT_BOX_VERSION= 00:01:07.103 EXTRA_VAGRANTFILES= 00:01:07.103 NIC_MODEL=e1000 00:01:07.103 00:01:07.103 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:07.104 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:10.391 Bringing machine 'default' up with 'libvirt' provider... 00:01:10.391 ==> default: Creating image (snapshot of base box volume). 00:01:10.651 ==> default: Creating domain with the following settings... 
00:01:10.651 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1723380181_342af5ea3fc8e6883cf2 00:01:10.651 ==> default: -- Domain type: kvm 00:01:10.651 ==> default: -- Cpus: 10 00:01:10.651 ==> default: -- Feature: acpi 00:01:10.651 ==> default: -- Feature: apic 00:01:10.651 ==> default: -- Feature: pae 00:01:10.651 ==> default: -- Memory: 12288M 00:01:10.651 ==> default: -- Memory Backing: hugepages: 00:01:10.651 ==> default: -- Management MAC: 00:01:10.651 ==> default: -- Loader: 00:01:10.651 ==> default: -- Nvram: 00:01:10.651 ==> default: -- Base box: spdk/fedora39 00:01:10.651 ==> default: -- Storage pool: default 00:01:10.651 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1723380181_342af5ea3fc8e6883cf2.img (20G) 00:01:10.651 ==> default: -- Volume Cache: default 00:01:10.651 ==> default: -- Kernel: 00:01:10.651 ==> default: -- Initrd: 00:01:10.651 ==> default: -- Graphics Type: vnc 00:01:10.651 ==> default: -- Graphics Port: -1 00:01:10.651 ==> default: -- Graphics IP: 127.0.0.1 00:01:10.651 ==> default: -- Graphics Password: Not defined 00:01:10.651 ==> default: -- Video Type: cirrus 00:01:10.651 ==> default: -- Video VRAM: 9216 00:01:10.651 ==> default: -- Sound Type: 00:01:10.651 ==> default: -- Keymap: en-us 00:01:10.651 ==> default: -- TPM Path: 00:01:10.651 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:10.651 ==> default: -- Command line args: 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:10.651 ==> default: -> value=-drive, 00:01:10.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:10.651 ==> default: -> value=-drive, 00:01:10.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:10.651 ==> default: -> value=-drive, 00:01:10.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.651 ==> default: -> value=-drive, 00:01:10.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.651 ==> default: -> value=-drive, 00:01:10.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:10.651 ==> default: -> value=-drive, 00:01:10.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:10.651 ==> default: -> value=-device, 00:01:10.651 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:10.651 ==> default: Creating shared folders metadata... 00:01:10.651 ==> default: Starting domain. 00:01:12.074 ==> default: Waiting for domain to get an IP address... 00:01:30.157 ==> default: Waiting for SSH to become available... 00:01:31.533 ==> default: Configuring and enabling network interfaces... 00:01:35.742 default: SSH address: 192.168.121.97:22 00:01:35.742 default: SSH username: vagrant 00:01:35.742 default: SSH auth method: private key 00:01:37.647 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.241 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:50.829 ==> default: Mounting SSHFS shared folder... 00:01:51.765 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:51.765 ==> default: Checking Mount.. 00:01:52.702 ==> default: Folder Successfully Mounted! 00:01:52.702 ==> default: Running provisioner: file... 00:01:53.638 default: ~/.gitconfig => .gitconfig 00:01:53.896 00:01:53.896 SUCCESS! 00:01:53.896 00:01:53.896 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:53.897 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.897 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:53.897 00:01:53.905 [Pipeline] } 00:01:53.920 [Pipeline] // stage 00:01:53.929 [Pipeline] dir 00:01:53.930 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:53.933 [Pipeline] { 00:01:53.987 [Pipeline] catchError 00:01:53.990 [Pipeline] { 00:01:54.007 [Pipeline] sh 00:01:54.279 + vagrant ssh-config --host vagrant 00:01:54.279 + sed -ne /^Host/,$p 00:01:54.279 + tee ssh_conf 00:01:58.466 Host vagrant 00:01:58.466 HostName 192.168.121.97 00:01:58.466 User vagrant 00:01:58.466 Port 22 00:01:58.466 UserKnownHostsFile /dev/null 00:01:58.466 StrictHostKeyChecking no 00:01:58.466 PasswordAuthentication no 00:01:58.466 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:58.466 IdentitiesOnly yes 00:01:58.466 LogLevel FATAL 00:01:58.466 ForwardAgent yes 00:01:58.466 ForwardX11 yes 00:01:58.466 00:01:58.480 [Pipeline] withEnv 00:01:58.482 [Pipeline] { 00:01:58.496 [Pipeline] sh 00:01:58.775 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:58.775 source /etc/os-release 00:01:58.775 [[ -e /image.version ]] && img=$(< /image.version) 00:01:58.775 # Minimal, systemd-like check. 
00:01:58.775 if [[ -e /.dockerenv ]]; then 00:01:58.775 # Clear garbage from the node's name: 00:01:58.775 # agt-er_autotest_547-896 -> autotest_547-896 00:01:58.775 # $HOSTNAME is the actual container id 00:01:58.775 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:58.775 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:58.775 # We can assume this is a mount from a host where container is running, 00:01:58.775 # so fetch its hostname to easily identify the target swarm worker. 00:01:58.775 container="$(< /etc/hostname) ($agent)" 00:01:58.775 else 00:01:58.775 # Fallback 00:01:58.775 container=$agent 00:01:58.775 fi 00:01:58.775 fi 00:01:58.775 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:58.775 00:01:59.045 [Pipeline] } 00:01:59.062 [Pipeline] // withEnv 00:01:59.070 [Pipeline] setCustomBuildProperty 00:01:59.085 [Pipeline] stage 00:01:59.087 [Pipeline] { (Tests) 00:01:59.105 [Pipeline] sh 00:01:59.382 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:59.696 [Pipeline] sh 00:01:59.973 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:00.244 [Pipeline] timeout 00:02:00.244 Timeout set to expire in 40 min 00:02:00.246 [Pipeline] { 00:02:00.260 [Pipeline] sh 00:02:00.537 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:01.105 HEAD is now at 227b8322c module/sock: free addr info before return 00:02:01.116 [Pipeline] sh 00:02:01.395 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:01.667 [Pipeline] sh 00:02:01.946 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:02.220 [Pipeline] sh 00:02:02.498 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:02.757 ++ readlink -f spdk_repo 00:02:02.757 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:02.757 + [[ -n /home/vagrant/spdk_repo ]] 00:02:02.757 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:02.757 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:02.757 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:02.757 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:02.757 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:02.757 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:02.757 + cd /home/vagrant/spdk_repo 00:02:02.757 + source /etc/os-release 00:02:02.757 ++ NAME='Fedora Linux' 00:02:02.757 ++ VERSION='39 (Cloud Edition)' 00:02:02.757 ++ ID=fedora 00:02:02.757 ++ VERSION_ID=39 00:02:02.757 ++ VERSION_CODENAME= 00:02:02.757 ++ PLATFORM_ID=platform:f39 00:02:02.757 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:02.757 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:02.757 ++ LOGO=fedora-logo-icon 00:02:02.757 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:02.757 ++ HOME_URL=https://fedoraproject.org/ 00:02:02.757 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:02.757 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:02.757 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:02.757 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:02.757 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:02.757 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:02.757 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:02.757 ++ SUPPORT_END=2024-11-12 00:02:02.757 ++ VARIANT='Cloud Edition' 00:02:02.757 ++ VARIANT_ID=cloud 00:02:02.757 + uname -a 00:02:02.757 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:02.757 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:03.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:03.273 Hugepages 00:02:03.273 node hugesize free / total 00:02:03.273 node0 1048576kB 0 / 0 00:02:03.273 node0 2048kB 0 / 0 00:02:03.273 00:02:03.273 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:03.273 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:03.532 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:03.532 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:03.532 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:03.532 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:03.532 + rm -f /tmp/spdk-ld-path 00:02:03.532 + source autorun-spdk.conf 00:02:03.532 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.532 ++ SPDK_TEST_NVME=1 00:02:03.532 ++ SPDK_TEST_FTL=1 00:02:03.532 ++ SPDK_TEST_ISAL=1 00:02:03.532 ++ SPDK_RUN_ASAN=1 00:02:03.532 ++ SPDK_RUN_UBSAN=1 00:02:03.532 ++ SPDK_TEST_XNVME=1 00:02:03.532 ++ SPDK_TEST_NVME_FDP=1 00:02:03.532 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:03.532 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.532 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.532 ++ RUN_NIGHTLY=1 00:02:03.532 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:03.532 + [[ -n '' ]] 00:02:03.532 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:03.532 + for M in /var/spdk/build-*-manifest.txt 00:02:03.532 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:03.532 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.532 + for M in /var/spdk/build-*-manifest.txt 00:02:03.532 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:03.532 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.532 + for M in /var/spdk/build-*-manifest.txt 00:02:03.532 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:03.532 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.532 ++ uname 00:02:03.532 + [[ Linux == 
\L\i\n\u\x ]] 00:02:03.532 + sudo dmesg -T 00:02:03.532 + sudo dmesg --clear 00:02:03.532 + dmesg_pid=6022 00:02:03.532 + [[ Fedora Linux == FreeBSD ]] 00:02:03.532 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.532 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.532 + sudo dmesg -Tw 00:02:03.532 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:03.532 + [[ -x /usr/src/fio-static/fio ]] 00:02:03.532 + export FIO_BIN=/usr/src/fio-static/fio 00:02:03.532 + FIO_BIN=/usr/src/fio-static/fio 00:02:03.532 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:03.532 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:03.532 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:03.532 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.532 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.532 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:03.532 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.532 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.532 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.532 Test configuration: 00:02:03.532 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.532 SPDK_TEST_NVME=1 00:02:03.532 SPDK_TEST_FTL=1 00:02:03.532 SPDK_TEST_ISAL=1 00:02:03.532 SPDK_RUN_ASAN=1 00:02:03.532 SPDK_RUN_UBSAN=1 00:02:03.532 SPDK_TEST_XNVME=1 00:02:03.532 SPDK_TEST_NVME_FDP=1 00:02:03.532 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:03.532 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:03.532 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.791 RUN_NIGHTLY=1 12:43:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:03.791 12:43:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.791 12:43:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.791 12:43:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.791 12:43:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.791 12:43:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.791 12:43:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.791 12:43:55 -- paths/export.sh@5 -- $ export PATH 00:02:03.791 12:43:55 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.791 12:43:55 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:03.791 12:43:55 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:03.791 12:43:55 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723380235.XXXXXX 00:02:03.791 12:43:55 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723380235.kRBRjg 00:02:03.791 12:43:55 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:03.791 12:43:55 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:02:03.791 12:43:55 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:03.791 12:43:55 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:03.791 12:43:55 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:03.791 12:43:55 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.791 12:43:55 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:03.791 12:43:55 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:02:03.791 12:43:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.791 12:43:55 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:03.791 12:43:55 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:03.791 12:43:55 -- pm/common@17 -- $ local monitor 00:02:03.791 12:43:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.791 12:43:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.791 12:43:55 -- pm/common@25 -- $ sleep 1 00:02:03.791 12:43:55 -- pm/common@21 -- $ date +%s 00:02:03.791 12:43:55 -- pm/common@21 -- $ date +%s 00:02:03.791 12:43:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723380235 00:02:03.791 12:43:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723380235 00:02:03.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723380235_collect-vmstat.pm.log 00:02:03.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723380235_collect-cpu-load.pm.log 00:02:04.727 12:43:56 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:04.727 12:43:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:04.727 12:43:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:04.727 12:43:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:04.727 12:43:56 -- spdk/autobuild.sh@16 -- $ date -u 00:02:04.727 
Sun Aug 11 12:43:56 PM UTC 2024 00:02:04.727 12:43:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:04.727 v24.09-pre-396-g227b8322c 00:02:04.727 12:43:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:04.727 12:43:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:04.727 12:43:56 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:04.727 12:43:56 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:04.727 12:43:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.727 ************************************ 00:02:04.727 START TEST asan 00:02:04.727 ************************************ 00:02:04.727 using asan 00:02:04.727 12:43:56 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:04.727 00:02:04.727 real 0m0.000s 00:02:04.727 user 0m0.000s 00:02:04.727 sys 0m0.000s 00:02:04.727 12:43:56 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:04.727 12:43:56 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:04.727 ************************************ 00:02:04.727 END TEST asan 00:02:04.727 ************************************ 00:02:04.727 12:43:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:04.727 12:43:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:04.727 12:43:56 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:04.727 12:43:56 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:04.727 12:43:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.727 ************************************ 00:02:04.727 START TEST ubsan 00:02:04.727 ************************************ 00:02:04.727 using ubsan 00:02:04.727 12:43:56 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:04.727 00:02:04.727 real 0m0.000s 00:02:04.727 user 0m0.000s 00:02:04.727 sys 0m0.000s 00:02:04.727 12:43:56 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:04.727 ************************************ 00:02:04.727 END TEST ubsan 00:02:04.727 ************************************ 00:02:04.727 12:43:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:04.987 12:43:56 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:04.987 12:43:56 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:04.987 12:43:56 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:04.987 12:43:56 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:04.987 12:43:56 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:04.987 12:43:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.987 ************************************ 00:02:04.987 START TEST build_native_dpdk 00:02:04.987 ************************************ 00:02:04.987 12:43:56 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:04.987 12:43:56 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:04.987 caf0f5d395 version: 22.11.4 00:02:04.987 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:04.987 dc9c799c7d vhost: fix missing spinlock unlock 00:02:04.987 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:04.987 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:04.987 12:43:56 build_native_dpdk 
-- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:04.987 patching file config/rte_config.h 00:02:04.987 Hunk #1 succeeded at 60 (offset 1 line). 
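The xtrace above walks through a field-by-field dotted-version compare: split both versions on the separators, compare numerically, and let the first differing field decide. A minimal standalone sketch of that idea, assuming illustrative names (this is not the actual cmp_versions helper from spdk/scripts/common.sh):

#!/usr/bin/env bash
# version_lt A B -> exit 0 if A < B, 1 otherwise.
# Illustrative re-creation of the compare traced above; names are hypothetical.
version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"       # "22.11.4" -> 22 11 4
    IFS=.-: read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}    # missing fields compare as 0
        (( x < y )) && return 0       # first smaller field decides: A < B
        (( x > y )) && return 1       # first larger field decides: A >= B
    done
    return 1                          # all fields equal: not strictly less
}

version_lt 22.11.4 21.11.0; echo $?   # prints 1, matching the trace (22 > 21 on the first field)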
00:02:04.987 12:43:56 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:04.987 12:43:56 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:04.988 12:43:56 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:04.988 12:43:56 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:04.988 patching file lib/pcapng/rte_pcapng.c 00:02:04.988 Hunk #1 succeeded at 110 (offset -18 lines). 
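Before the meson configure step that follows, the trace assembled compiler-version-gated CFLAGS and a driver list and handed them to meson. A condensed, illustrative sketch of that hand-off, assuming placeholder paths and the modern `meson setup` spelling (the real logic lives in common/autobuild_common.sh and is longer):

#!/usr/bin/env bash
# Illustrative condensation of the DPDK build prep traced above; not the
# real _build_native_dpdk. Paths and variable names are placeholders.
set -euo pipefail

dpdk_dir=/home/vagrant/spdk_repo/dpdk            # source checkout, as in the log
dpdk_cflags='-fPIC -g -fcommon'
compiler_version=$(gcc -dumpversion | cut -d. -f1)
(( compiler_version >= 5 ))  && dpdk_cflags+=' -Werror'               # gcc >= 5, per the trace
(( compiler_version >= 10 )) && dpdk_cflags+=' -Wno-stringop-overflow' # gcc >= 10, per the trace

drivers=(bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base)

cd "$dpdk_dir"
meson setup build-tmp \
    --prefix="$dpdk_dir/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    "-Dc_args=$dpdk_cflags" \
    -Denable_drivers="$(printf %s, "${drivers[@]}")"   # trailing comma is harmless, as in the log

The actual invocation recorded by the job follows below.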
00:02:04.988 12:43:56 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:04.988 12:43:56 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:04.988 12:43:56 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:04.988 12:43:56 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:04.988 12:43:56 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:10.283 The Meson build system 00:02:10.283 Version: 1.5.0 00:02:10.283 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:10.283 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:10.283 Build type: native build 00:02:10.283 Program cat found: YES (/usr/bin/cat) 00:02:10.283 Project name: DPDK 00:02:10.283 Project version: 22.11.4 00:02:10.283 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:10.283 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:10.283 Host machine cpu family: x86_64 00:02:10.283 Host machine cpu: x86_64 00:02:10.283 Message: ## Building in Developer Mode ## 00:02:10.283 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:10.283 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:10.283 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:10.283 Program objdump found: YES (/usr/bin/objdump) 00:02:10.283 Program python3 found: YES (/usr/bin/python3) 00:02:10.283 Program cat found: YES (/usr/bin/cat) 00:02:10.283 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:10.283 Checking for size of "void *" : 8 00:02:10.283 Checking for size of "void *" : 8 (cached) 00:02:10.283 Library m found: YES 00:02:10.283 Library numa found: YES 00:02:10.283 Has header "numaif.h" : YES 00:02:10.283 Library fdt found: NO 00:02:10.283 Library execinfo found: NO 00:02:10.283 Has header "execinfo.h" : YES 00:02:10.283 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:10.283 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:10.283 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:10.283 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:10.283 Run-time dependency openssl found: YES 3.1.1 00:02:10.283 Run-time dependency libpcap found: YES 1.10.4 00:02:10.283 Has header "pcap.h" with dependency libpcap: YES 00:02:10.283 Compiler for C supports arguments -Wcast-qual: YES 00:02:10.283 Compiler for C supports arguments -Wdeprecated: YES 00:02:10.283 Compiler for C supports arguments -Wformat: YES 00:02:10.283 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:10.283 Compiler for C supports arguments -Wformat-security: NO 00:02:10.283 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.283 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:10.283 Compiler for C supports arguments -Wnested-externs: YES 00:02:10.283 Compiler for C supports arguments -Wold-style-definition: YES 00:02:10.283 Compiler for C supports arguments -Wpointer-arith: YES 00:02:10.283 Compiler for C supports arguments -Wsign-compare: YES 00:02:10.283 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:10.283 Compiler for C supports arguments -Wundef: YES 00:02:10.283 Compiler for C supports arguments -Wwrite-strings: YES 00:02:10.283 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:10.283 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:10.283 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.283 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:10.283 Compiler for C supports arguments -mavx512f: YES 00:02:10.283 Checking if "AVX512 checking" compiles: YES 00:02:10.283 Fetching value of define "__SSE4_2__" : 1 00:02:10.283 Fetching value of define "__AES__" : 1 00:02:10.283 Fetching value of define "__AVX__" : 1 00:02:10.284 Fetching value of define "__AVX2__" : 1 00:02:10.284 Fetching value of define "__AVX512BW__" : (undefined) 00:02:10.284 Fetching value of define "__AVX512CD__" : (undefined) 00:02:10.284 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:10.284 Fetching value of define "__AVX512F__" : (undefined) 00:02:10.284 Fetching value of define "__AVX512VL__" : (undefined) 00:02:10.284 Fetching value of define "__PCLMUL__" : 1 00:02:10.284 Fetching value of define "__RDRND__" : 1 00:02:10.284 Fetching value of define "__RDSEED__" : 1 00:02:10.284 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:10.284 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:10.284 Message: lib/kvargs: Defining dependency "kvargs" 00:02:10.284 Message: lib/telemetry: Defining dependency "telemetry" 00:02:10.284 Checking for function "getentropy" : YES 00:02:10.284 Message: lib/eal: Defining dependency "eal" 00:02:10.284 Message: lib/ring: Defining dependency "ring" 00:02:10.284 Message: lib/rcu: Defining dependency "rcu" 00:02:10.284 Message: lib/mempool: Defining dependency "mempool" 00:02:10.284 Message: lib/mbuf: Defining dependency "mbuf" 00:02:10.284 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:10.284 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.284 Compiler for C supports arguments -mpclmul: YES 00:02:10.284 Compiler for C supports arguments -maes: YES 00:02:10.284 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.284 Compiler for C supports arguments -mavx512bw: YES 00:02:10.284 Compiler for C supports arguments -mavx512dq: YES 00:02:10.284 Compiler for C supports arguments -mavx512vl: YES 00:02:10.284 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:10.284 Compiler for C supports arguments -mavx2: YES 00:02:10.284 Compiler for C supports arguments -mavx: YES 00:02:10.284 Message: lib/net: Defining dependency "net" 00:02:10.284 Message: lib/meter: Defining dependency "meter" 00:02:10.284 Message: lib/ethdev: Defining dependency "ethdev" 00:02:10.284 Message: lib/pci: Defining dependency "pci" 00:02:10.284 Message: lib/cmdline: Defining dependency "cmdline" 00:02:10.284 Message: lib/metrics: Defining dependency "metrics" 00:02:10.284 Message: lib/hash: Defining dependency "hash" 00:02:10.284 Message: lib/timer: Defining dependency "timer" 00:02:10.284 Fetching value of define "__AVX2__" : 1 (cached) 00:02:10.284 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.284 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:10.284 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:10.284 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:10.284 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:10.284 Message: lib/acl: Defining dependency "acl" 00:02:10.284 Message: lib/bbdev: Defining dependency "bbdev" 00:02:10.284 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:10.284 Run-time dependency libelf found: YES 0.191 00:02:10.284 Message: lib/bpf: Defining dependency "bpf" 00:02:10.284 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:10.284 Message: lib/compressdev: Defining dependency "compressdev" 00:02:10.284 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:10.284 Message: lib/distributor: Defining dependency "distributor" 00:02:10.284 Message: lib/efd: Defining dependency "efd" 00:02:10.284 Message: lib/eventdev: Defining dependency "eventdev" 00:02:10.284 Message: lib/gpudev: Defining dependency "gpudev" 00:02:10.284 Message: lib/gro: Defining dependency "gro" 00:02:10.284 Message: lib/gso: Defining dependency "gso" 00:02:10.284 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:10.284 Message: lib/jobstats: Defining dependency "jobstats" 00:02:10.284 Message: lib/latencystats: Defining dependency "latencystats" 00:02:10.284 Message: lib/lpm: Defining dependency "lpm" 00:02:10.284 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.284 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:10.284 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:10.284 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:10.284 Message: lib/member: Defining dependency "member" 00:02:10.284 Message: lib/pcapng: Defining dependency "pcapng" 00:02:10.284 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:10.284 Message: lib/power: Defining dependency "power" 00:02:10.284 Message: lib/rawdev: Defining dependency "rawdev" 00:02:10.284 Message: lib/regexdev: Defining dependency "regexdev" 00:02:10.284 Message: lib/dmadev: Defining dependency "dmadev" 00:02:10.284 Message: lib/rib: Defining 
dependency "rib" 00:02:10.284 Message: lib/reorder: Defining dependency "reorder" 00:02:10.284 Message: lib/sched: Defining dependency "sched" 00:02:10.284 Message: lib/security: Defining dependency "security" 00:02:10.284 Message: lib/stack: Defining dependency "stack" 00:02:10.284 Has header "linux/userfaultfd.h" : YES 00:02:10.284 Message: lib/vhost: Defining dependency "vhost" 00:02:10.284 Message: lib/ipsec: Defining dependency "ipsec" 00:02:10.284 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.284 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:10.284 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:10.284 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:10.284 Message: lib/fib: Defining dependency "fib" 00:02:10.284 Message: lib/port: Defining dependency "port" 00:02:10.284 Message: lib/pdump: Defining dependency "pdump" 00:02:10.284 Message: lib/table: Defining dependency "table" 00:02:10.284 Message: lib/pipeline: Defining dependency "pipeline" 00:02:10.284 Message: lib/graph: Defining dependency "graph" 00:02:10.284 Message: lib/node: Defining dependency "node" 00:02:10.284 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:10.284 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:10.284 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:10.284 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:10.284 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:10.284 Compiler for C supports arguments -Wno-unused-value: YES 00:02:10.284 Compiler for C supports arguments -Wno-format: YES 00:02:10.284 Compiler for C supports arguments -Wno-format-security: YES 00:02:10.284 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:11.660 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:11.660 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:11.660 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:11.660 Fetching value of define "__AVX2__" : 1 (cached) 00:02:11.660 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:11.660 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.660 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:11.660 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:11.660 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:11.660 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:11.660 Configuring doxy-api.conf using configuration 00:02:11.660 Program sphinx-build found: NO 00:02:11.660 Configuring rte_build_config.h using configuration 00:02:11.660 Message: 00:02:11.660 ================= 00:02:11.660 Applications Enabled 00:02:11.660 ================= 00:02:11.660 00:02:11.660 apps: 00:02:11.660 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:11.660 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:11.660 test-security-perf, 00:02:11.660 00:02:11.660 Message: 00:02:11.660 ================= 00:02:11.660 Libraries Enabled 00:02:11.660 ================= 00:02:11.660 00:02:11.660 libs: 00:02:11.660 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:11.660 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:11.660 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:11.660 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:11.660 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:11.660 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:11.660 table, pipeline, graph, node, 00:02:11.660 00:02:11.660 Message: 00:02:11.660 =============== 00:02:11.660 Drivers Enabled 00:02:11.660 =============== 00:02:11.660 00:02:11.660 common: 00:02:11.660 00:02:11.660 bus: 00:02:11.660 pci, vdev, 00:02:11.660 mempool: 00:02:11.660 ring, 00:02:11.660 dma: 00:02:11.660 00:02:11.660 net: 00:02:11.660 i40e, 00:02:11.660 raw: 00:02:11.660 00:02:11.660 crypto: 00:02:11.660 00:02:11.660 compress: 00:02:11.660 00:02:11.660 regex: 00:02:11.660 00:02:11.660 vdpa: 00:02:11.660 00:02:11.660 event: 00:02:11.660 00:02:11.660 baseband: 00:02:11.660 00:02:11.660 gpu: 00:02:11.660 00:02:11.660 00:02:11.660 Message: 00:02:11.660 ================= 00:02:11.660 Content Skipped 00:02:11.660 ================= 00:02:11.660 00:02:11.660 apps: 00:02:11.660 00:02:11.660 libs: 00:02:11.660 kni: explicitly disabled via build config (deprecated lib) 00:02:11.660 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:11.660 00:02:11.660 drivers: 00:02:11.660 common/cpt: not in enabled drivers build config 00:02:11.660 common/dpaax: not in enabled drivers build config 00:02:11.660 common/iavf: not in enabled drivers build config 00:02:11.660 common/idpf: not in enabled drivers build config 00:02:11.660 common/mvep: not in enabled drivers build config 00:02:11.660 common/octeontx: not in enabled drivers build config 00:02:11.660 bus/auxiliary: not in enabled drivers build config 00:02:11.660 bus/dpaa: not in enabled drivers build config 00:02:11.660 bus/fslmc: not in enabled drivers build config 00:02:11.660 bus/ifpga: not in enabled drivers build config 00:02:11.660 bus/vmbus: not in enabled drivers build config 00:02:11.660 common/cnxk: not in enabled drivers build config 00:02:11.660 common/mlx5: not in enabled drivers build config 00:02:11.660 common/qat: not in enabled drivers build config 00:02:11.660 common/sfc_efx: not in enabled drivers build config 00:02:11.660 mempool/bucket: not in enabled drivers build config 00:02:11.660 mempool/cnxk: not in enabled drivers build config 00:02:11.660 mempool/dpaa: not in enabled drivers build config 00:02:11.660 mempool/dpaa2: not in enabled drivers build config 00:02:11.660 mempool/octeontx: not in enabled drivers build config 00:02:11.660 mempool/stack: not in enabled drivers build config 00:02:11.660 dma/cnxk: not in enabled drivers build config 00:02:11.660 dma/dpaa: not in enabled drivers build config 00:02:11.660 dma/dpaa2: not in enabled drivers build config 00:02:11.660 dma/hisilicon: not in enabled drivers build config 00:02:11.660 dma/idxd: not in enabled drivers build config 00:02:11.660 dma/ioat: not in enabled drivers build config 00:02:11.660 dma/skeleton: not in enabled drivers build config 00:02:11.660 net/af_packet: not in enabled drivers build config 00:02:11.660 net/af_xdp: not in enabled drivers build config 00:02:11.660 net/ark: not in enabled drivers build config 00:02:11.660 net/atlantic: not in enabled drivers build config 00:02:11.660 net/avp: not in enabled drivers build config 00:02:11.660 net/axgbe: not in enabled drivers build config 00:02:11.660 net/bnx2x: not in enabled drivers build config 00:02:11.660 net/bnxt: not in enabled drivers build config 00:02:11.660 net/bonding: not in enabled drivers build config 00:02:11.660 net/cnxk: not in enabled drivers build config 00:02:11.660 net/cxgbe: not in 
enabled drivers build config 00:02:11.660 net/dpaa: not in enabled drivers build config 00:02:11.660 net/dpaa2: not in enabled drivers build config 00:02:11.660 net/e1000: not in enabled drivers build config 00:02:11.660 net/ena: not in enabled drivers build config 00:02:11.660 net/enetc: not in enabled drivers build config 00:02:11.660 net/enetfec: not in enabled drivers build config 00:02:11.660 net/enic: not in enabled drivers build config 00:02:11.660 net/failsafe: not in enabled drivers build config 00:02:11.660 net/fm10k: not in enabled drivers build config 00:02:11.660 net/gve: not in enabled drivers build config 00:02:11.660 net/hinic: not in enabled drivers build config 00:02:11.660 net/hns3: not in enabled drivers build config 00:02:11.660 net/iavf: not in enabled drivers build config 00:02:11.660 net/ice: not in enabled drivers build config 00:02:11.660 net/idpf: not in enabled drivers build config 00:02:11.660 net/igc: not in enabled drivers build config 00:02:11.660 net/ionic: not in enabled drivers build config 00:02:11.660 net/ipn3ke: not in enabled drivers build config 00:02:11.660 net/ixgbe: not in enabled drivers build config 00:02:11.660 net/kni: not in enabled drivers build config 00:02:11.660 net/liquidio: not in enabled drivers build config 00:02:11.660 net/mana: not in enabled drivers build config 00:02:11.660 net/memif: not in enabled drivers build config 00:02:11.660 net/mlx4: not in enabled drivers build config 00:02:11.660 net/mlx5: not in enabled drivers build config 00:02:11.660 net/mvneta: not in enabled drivers build config 00:02:11.660 net/mvpp2: not in enabled drivers build config 00:02:11.660 net/netvsc: not in enabled drivers build config 00:02:11.660 net/nfb: not in enabled drivers build config 00:02:11.660 net/nfp: not in enabled drivers build config 00:02:11.660 net/ngbe: not in enabled drivers build config 00:02:11.660 net/null: not in enabled drivers build config 00:02:11.660 net/octeontx: not in enabled drivers build config 00:02:11.660 net/octeon_ep: not in enabled drivers build config 00:02:11.660 net/pcap: not in enabled drivers build config 00:02:11.660 net/pfe: not in enabled drivers build config 00:02:11.660 net/qede: not in enabled drivers build config 00:02:11.660 net/ring: not in enabled drivers build config 00:02:11.660 net/sfc: not in enabled drivers build config 00:02:11.660 net/softnic: not in enabled drivers build config 00:02:11.660 net/tap: not in enabled drivers build config 00:02:11.660 net/thunderx: not in enabled drivers build config 00:02:11.660 net/txgbe: not in enabled drivers build config 00:02:11.660 net/vdev_netvsc: not in enabled drivers build config 00:02:11.660 net/vhost: not in enabled drivers build config 00:02:11.660 net/virtio: not in enabled drivers build config 00:02:11.660 net/vmxnet3: not in enabled drivers build config 00:02:11.660 raw/cnxk_bphy: not in enabled drivers build config 00:02:11.660 raw/cnxk_gpio: not in enabled drivers build config 00:02:11.660 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:11.660 raw/ifpga: not in enabled drivers build config 00:02:11.660 raw/ntb: not in enabled drivers build config 00:02:11.660 raw/skeleton: not in enabled drivers build config 00:02:11.660 crypto/armv8: not in enabled drivers build config 00:02:11.660 crypto/bcmfs: not in enabled drivers build config 00:02:11.660 crypto/caam_jr: not in enabled drivers build config 00:02:11.660 crypto/ccp: not in enabled drivers build config 00:02:11.661 crypto/cnxk: not in enabled drivers build config 00:02:11.661 
crypto/dpaa_sec: not in enabled drivers build config 00:02:11.661 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.661 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.661 crypto/mlx5: not in enabled drivers build config 00:02:11.661 crypto/mvsam: not in enabled drivers build config 00:02:11.661 crypto/nitrox: not in enabled drivers build config 00:02:11.661 crypto/null: not in enabled drivers build config 00:02:11.661 crypto/octeontx: not in enabled drivers build config 00:02:11.661 crypto/openssl: not in enabled drivers build config 00:02:11.661 crypto/scheduler: not in enabled drivers build config 00:02:11.661 crypto/uadk: not in enabled drivers build config 00:02:11.661 crypto/virtio: not in enabled drivers build config 00:02:11.661 compress/isal: not in enabled drivers build config 00:02:11.661 compress/mlx5: not in enabled drivers build config 00:02:11.661 compress/octeontx: not in enabled drivers build config 00:02:11.661 compress/zlib: not in enabled drivers build config 00:02:11.661 regex/mlx5: not in enabled drivers build config 00:02:11.661 regex/cn9k: not in enabled drivers build config 00:02:11.661 vdpa/ifc: not in enabled drivers build config 00:02:11.661 vdpa/mlx5: not in enabled drivers build config 00:02:11.661 vdpa/sfc: not in enabled drivers build config 00:02:11.661 event/cnxk: not in enabled drivers build config 00:02:11.661 event/dlb2: not in enabled drivers build config 00:02:11.661 event/dpaa: not in enabled drivers build config 00:02:11.661 event/dpaa2: not in enabled drivers build config 00:02:11.661 event/dsw: not in enabled drivers build config 00:02:11.661 event/opdl: not in enabled drivers build config 00:02:11.661 event/skeleton: not in enabled drivers build config 00:02:11.661 event/sw: not in enabled drivers build config 00:02:11.661 event/octeontx: not in enabled drivers build config 00:02:11.661 baseband/acc: not in enabled drivers build config 00:02:11.661 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:11.661 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:11.661 baseband/la12xx: not in enabled drivers build config 00:02:11.661 baseband/null: not in enabled drivers build config 00:02:11.661 baseband/turbo_sw: not in enabled drivers build config 00:02:11.661 gpu/cuda: not in enabled drivers build config 00:02:11.661 00:02:11.661 00:02:11.661 Build targets in project: 314 00:02:11.661 00:02:11.661 DPDK 22.11.4 00:02:11.661 00:02:11.661 User defined options 00:02:11.661 libdir : lib 00:02:11.661 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:11.661 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:11.661 c_link_args : 00:02:11.661 enable_docs : false 00:02:11.661 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:11.661 enable_kmods : false 00:02:11.661 machine : native 00:02:11.661 tests : false 00:02:11.661 00:02:11.661 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.661 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
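For reference, the "User defined options" summary printed above can be read back as an explicit configure command. The sketch below is inferred from that summary and from the build directory used in the following ninja step; it is not the literal command the autotest script ran (the warning above shows the real invocation used the deprecated `meson [options]` form rather than `meson setup [options]`), and every path and option value is copied from the log rather than assumed:

    # Reconstructed (hypothetical) configure step mirroring the printed options
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dc_link_args='' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # Build step, matching the ninja invocation recorded in the log below
    ninja -C build-tmp -j10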
00:02:11.919 12:44:03 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:11.919 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:11.919 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:11.919 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:11.919 [3/743] Generating lib/rte_kvargs_def with a custom command 00:02:11.919 [4/743] Generating lib/rte_telemetry_def with a custom command 00:02:12.178 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.178 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.178 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.178 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.178 [9/743] Linking static target lib/librte_kvargs.a 00:02:12.178 [10/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.178 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.178 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:12.178 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.178 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.178 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:12.436 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.436 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.436 [18/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.436 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:12.436 [20/743] Linking target lib/librte_kvargs.so.23.0 00:02:12.436 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.436 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:12.436 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.436 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.436 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.436 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:12.695 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:12.695 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.695 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:12.695 [30/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:12.695 [31/743] Linking static target lib/librte_telemetry.a 00:02:12.695 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:12.695 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.695 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:12.695 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.695 [36/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:12.953 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.953 [38/743] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.953 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.953 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.953 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.953 [42/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.953 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.953 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:12.953 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.211 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.211 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:13.211 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:13.211 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:13.211 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.211 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.211 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.211 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.211 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.211 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.211 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.211 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.211 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.469 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.469 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.469 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.469 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.469 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.469 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.469 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:13.469 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.469 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.469 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:13.469 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.469 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:13.727 [71/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.727 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:13.727 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:13.727 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.727 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.727 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.727 [77/743] Generating lib/rte_eal_def with a custom command 00:02:13.727 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:13.727 [79/743] Generating lib/rte_ring_def with a custom command 00:02:13.727 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.727 [81/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.727 [82/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.727 [83/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:13.727 [84/743] Generating lib/rte_ring_mingw with a custom command 00:02:13.727 [85/743] Generating lib/rte_rcu_def with a custom command 00:02:13.727 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:13.986 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.986 [88/743] Linking static target lib/librte_ring.a 00:02:13.986 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.986 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:13.986 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:13.986 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.986 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.244 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.244 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.244 [96/743] Linking static target lib/librte_eal.a 00:02:14.503 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.503 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:14.503 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.503 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.503 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:14.503 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.503 [103/743] Linking static target lib/librte_rcu.a 00:02:14.503 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.761 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.761 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.761 [107/743] Linking static target lib/librte_mempool.a 00:02:14.761 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.019 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.019 [110/743] Generating lib/rte_net_def with a custom command 00:02:15.019 [111/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.019 [112/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.019 [113/743] Generating lib/rte_net_mingw with a custom command 00:02:15.019 [114/743] Generating lib/rte_meter_def with a custom command 00:02:15.019 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:15.019 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.277 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.277 [118/743] Linking static target lib/librte_meter.a 00:02:15.277 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.277 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.277 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.277 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:15.536 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.536 [124/743] Linking static target lib/librte_mbuf.a 00:02:15.536 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.536 [126/743] Linking static target lib/librte_net.a 00:02:15.536 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.794 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.794 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.794 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.054 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.054 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.054 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.054 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.312 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.570 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.570 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:16.570 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:16.829 [139/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.829 [140/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.829 [141/743] Linking static target lib/librte_pci.a 00:02:16.829 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.829 [143/743] Generating lib/rte_pci_mingw with a custom command 00:02:16.829 [144/743] Generating lib/rte_pci_def with a custom command 00:02:16.829 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.829 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.829 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.829 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.829 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.087 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.087 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.087 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.087 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.087 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.087 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.087 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.087 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:17.087 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:17.087 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.087 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:17.087 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.087 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:17.346 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:17.346 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.346 [165/743] Generating lib/rte_hash_def with a custom command 00:02:17.346 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.346 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:17.346 [168/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.346 [169/743] Generating lib/rte_timer_def with a custom command 00:02:17.346 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:17.346 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.346 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.346 [173/743] Linking static target lib/librte_cmdline.a 00:02:17.913 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:17.913 [175/743] Linking static target lib/librte_metrics.a 00:02:17.913 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.913 [177/743] Linking static target lib/librte_timer.a 00:02:18.171 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.171 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.171 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.171 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:18.171 [182/743] Linking static target lib/librte_ethdev.a 00:02:18.430 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.430 [184/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.996 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:18.996 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:18.996 [187/743] Generating lib/rte_acl_def with a custom command 00:02:18.996 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:18.997 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:18.997 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:18.997 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:18.997 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:18.997 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:19.255 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:19.514 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:19.514 [196/743] Linking static target lib/librte_bitratestats.a 00:02:19.773 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:19.773 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.773 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:19.773 [200/743] Linking static target lib/librte_bbdev.a 00:02:20.031 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:20.289 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.289 [203/743] Linking static target lib/librte_hash.a 00:02:20.289 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:20.547 [205/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:20.548 [206/743] Compiling C object 
lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:20.548 [207/743] Linking static target lib/acl/libavx512_tmp.a 00:02:20.548 [208/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.548 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:20.806 [210/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:21.080 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:21.080 [212/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.080 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:21.080 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:21.080 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:21.080 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:21.080 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:21.080 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:21.080 [219/743] Linking static target lib/librte_acl.a 00:02:21.384 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:21.384 [221/743] Linking static target lib/librte_cfgfile.a 00:02:21.384 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:21.384 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:21.384 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:21.384 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.384 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.651 [227/743] Linking target lib/librte_eal.so.23.0 00:02:21.651 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.651 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:21.651 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:21.651 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:21.651 [232/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:21.651 [233/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:21.651 [234/743] Linking target lib/librte_ring.so.23.0 00:02:21.651 [235/743] Linking target lib/librte_meter.so.23.0 00:02:21.909 [236/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:21.910 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:21.910 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:21.910 [239/743] Linking target lib/librte_pci.so.23.0 00:02:21.910 [240/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:21.910 [241/743] Linking target lib/librte_timer.so.23.0 00:02:21.910 [242/743] Linking target lib/librte_rcu.so.23.0 00:02:21.910 [243/743] Linking target lib/librte_mempool.so.23.0 00:02:21.910 [244/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.910 [245/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:21.910 [246/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:21.910 [247/743] Linking static target lib/librte_bpf.a 00:02:21.910 [248/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:21.910 
[249/743] Linking target lib/librte_acl.so.23.0 00:02:21.910 [250/743] Linking static target lib/librte_compressdev.a 00:02:21.910 [251/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:21.910 [252/743] Linking target lib/librte_cfgfile.so.23.0 00:02:22.167 [253/743] Linking target lib/librte_mbuf.so.23.0 00:02:22.167 [254/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.167 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:22.167 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:22.167 [257/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:22.167 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:22.167 [259/743] Linking target lib/librte_net.so.23.0 00:02:22.167 [260/743] Linking target lib/librte_bbdev.so.23.0 00:02:22.425 [261/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.425 [262/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.425 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:22.425 [264/743] Generating lib/rte_efd_def with a custom command 00:02:22.425 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:22.425 [266/743] Linking target lib/librte_cmdline.so.23.0 00:02:22.425 [267/743] Linking target lib/librte_hash.so.23.0 00:02:22.683 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:22.683 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:22.683 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:22.683 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:22.942 [272/743] Linking static target lib/librte_distributor.a 00:02:22.942 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.942 [274/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.942 [275/743] Linking target lib/librte_compressdev.so.23.0 00:02:22.942 [276/743] Linking target lib/librte_ethdev.so.23.0 00:02:22.942 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:23.200 [278/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.200 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:23.200 [280/743] Linking target lib/librte_distributor.so.23.0 00:02:23.200 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:23.200 [282/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:23.200 [283/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:23.200 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:23.200 [285/743] Linking target lib/librte_bpf.so.23.0 00:02:23.458 [286/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:23.458 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:23.458 [288/743] Linking target lib/librte_bitratestats.so.23.0 00:02:23.458 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:23.458 [290/743] Generating lib/rte_gpudev_mingw with a custom 
command 00:02:23.458 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:23.716 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:23.974 [293/743] Linking static target lib/librte_efd.a 00:02:23.974 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:23.974 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.974 [296/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.974 [297/743] Linking static target lib/librte_cryptodev.a 00:02:24.232 [298/743] Linking target lib/librte_efd.so.23.0 00:02:24.232 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:24.232 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:24.232 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:24.232 [302/743] Linking static target lib/librte_gpudev.a 00:02:24.232 [303/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:24.232 [304/743] Generating lib/rte_gro_def with a custom command 00:02:24.232 [305/743] Generating lib/rte_gro_mingw with a custom command 00:02:24.490 [306/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:24.490 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:24.747 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:25.005 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:25.005 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:25.005 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.005 [312/743] Generating lib/rte_gso_def with a custom command 00:02:25.005 [313/743] Generating lib/rte_gso_mingw with a custom command 00:02:25.005 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.005 [315/743] Linking static target lib/librte_gro.a 00:02:25.005 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.263 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:25.263 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:25.263 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:25.263 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.263 [321/743] Linking target lib/librte_gro.so.23.0 00:02:25.521 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:25.521 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:25.521 [324/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:25.521 [325/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:25.521 [326/743] Linking static target lib/librte_eventdev.a 00:02:25.521 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:25.521 [328/743] Linking static target lib/librte_gso.a 00:02:25.779 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:25.779 [330/743] Linking static target lib/librte_jobstats.a 00:02:25.779 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:25.779 [332/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.779 [333/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:25.779 [334/743] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:25.779 [335/743] Linking target lib/librte_gso.so.23.0 00:02:25.779 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:25.779 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:26.037 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:26.037 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:26.037 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:26.037 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:26.037 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:02:26.037 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.037 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:26.037 [345/743] Linking target lib/librte_jobstats.so.23.0 00:02:26.295 [346/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.295 [347/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:26.295 [348/743] Linking static target lib/librte_ip_frag.a 00:02:26.295 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:26.295 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:26.552 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.552 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:26.552 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:26.552 [354/743] Linking static target lib/librte_latencystats.a 00:02:26.811 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:26.811 [356/743] Generating lib/rte_member_def with a custom command 00:02:26.811 [357/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:26.811 [358/743] Generating lib/rte_member_mingw with a custom command 00:02:26.811 [359/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:26.811 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:26.811 [361/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:26.811 [362/743] Generating lib/rte_pcapng_def with a custom command 00:02:26.811 [363/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:26.811 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.811 [365/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.811 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:26.811 [367/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:27.068 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:27.068 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:27.068 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.326 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:27.326 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:27.326 [373/743] Linking static target lib/librte_lpm.a 00:02:27.326 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:27.326 [375/743] Generating 
lib/rte_power_def with a custom command 00:02:27.584 [376/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.584 [377/743] Generating lib/rte_power_mingw with a custom command 00:02:27.584 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:27.584 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:27.584 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:27.584 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:27.584 [382/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:27.584 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:27.584 [384/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.584 [385/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.584 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:27.841 [387/743] Linking target lib/librte_lpm.so.23.0 00:02:27.841 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:27.841 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:27.842 [390/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:27.842 [391/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:27.842 [392/743] Linking static target lib/librte_pcapng.a 00:02:27.842 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.842 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:27.842 [395/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:27.842 [396/743] Linking static target lib/librte_rawdev.a 00:02:27.842 [397/743] Generating lib/rte_rib_def with a custom command 00:02:27.842 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:27.842 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:27.842 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:28.098 [401/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.098 [402/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.098 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:28.098 [404/743] Linking static target lib/librte_power.a 00:02:28.098 [405/743] Linking static target lib/librte_dmadev.a 00:02:28.098 [406/743] Linking target lib/librte_pcapng.so.23.0 00:02:28.355 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:28.355 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.355 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:28.355 [410/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:28.355 [411/743] Linking static target lib/librte_regexdev.a 00:02:28.355 [412/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:28.355 [413/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:28.355 [414/743] Linking static target lib/librte_member.a 00:02:28.355 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:28.613 [416/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:28.613 [417/743] Generating lib/rte_sched_def with a custom command 00:02:28.613 [418/743] Generating 
lib/rte_sched_mingw with a custom command 00:02:28.613 [419/743] Generating lib/rte_security_def with a custom command 00:02:28.613 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:28.613 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:28.613 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.613 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:28.613 [424/743] Linking target lib/librte_dmadev.so.23.0 00:02:28.613 [425/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.613 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:28.613 [427/743] Linking static target lib/librte_reorder.a 00:02:28.871 [428/743] Generating lib/rte_stack_def with a custom command 00:02:28.871 [429/743] Generating lib/rte_stack_mingw with a custom command 00:02:28.871 [430/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.871 [431/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:28.871 [432/743] Linking static target lib/librte_stack.a 00:02:28.871 [433/743] Linking target lib/librte_member.so.23.0 00:02:28.871 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:28.871 [435/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.871 [436/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.871 [437/743] Linking target lib/librte_reorder.so.23.0 00:02:29.130 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:29.130 [439/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.130 [440/743] Linking static target lib/librte_rib.a 00:02:29.130 [441/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.130 [442/743] Linking target lib/librte_stack.so.23.0 00:02:29.130 [443/743] Linking target lib/librte_power.so.23.0 00:02:29.130 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.130 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:29.387 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.387 [447/743] Linking static target lib/librte_security.a 00:02:29.387 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.387 [449/743] Linking target lib/librte_rib.so.23.0 00:02:29.645 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:29.645 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.645 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:29.645 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:29.645 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.903 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.903 [456/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.903 [457/743] Linking target lib/librte_security.so.23.0 00:02:29.903 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:29.903 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:29.903 [460/743] Linking static target lib/librte_sched.a 00:02:30.468 [461/743] 
Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.468 [462/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:30.468 [463/743] Linking target lib/librte_sched.so.23.0 00:02:30.468 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.468 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:30.468 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:30.468 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:30.468 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:30.726 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.726 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:30.726 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:30.984 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:31.242 [473/743] Generating lib/rte_fib_def with a custom command 00:02:31.242 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:31.242 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:31.242 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:31.242 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:31.242 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:31.242 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:31.500 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:31.500 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:31.500 [482/743] Linking static target lib/librte_ipsec.a 00:02:31.758 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.015 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:32.015 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:32.015 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:32.015 [487/743] Linking static target lib/librte_fib.a 00:02:32.273 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:32.273 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:32.273 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:32.273 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:32.531 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.531 [493/743] Linking target lib/librte_fib.so.23.0 00:02:32.531 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:33.096 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:33.096 [496/743] Generating lib/rte_port_def with a custom command 00:02:33.096 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:33.096 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:33.096 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:33.096 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:33.096 [501/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:33.096 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:33.354 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:33.354 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:33.612 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:33.612 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:33.612 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:33.612 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:33.612 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:33.612 [510/743] Linking static target lib/librte_port.a 00:02:33.870 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:34.128 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:34.128 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:34.128 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.386 [515/743] Linking target lib/librte_port.so.23.0 00:02:34.386 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:34.386 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:34.386 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:34.386 [519/743] Linking static target lib/librte_pdump.a 00:02:34.386 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:34.645 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.645 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:34.903 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:34.903 [524/743] Generating lib/rte_table_def with a custom command 00:02:34.903 [525/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:34.903 [526/743] Generating lib/rte_table_mingw with a custom command 00:02:35.161 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:35.161 [528/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.161 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:35.419 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:35.419 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:35.419 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:35.419 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:35.677 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:35.677 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:35.677 [536/743] Linking static target lib/librte_table.a 00:02:35.677 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:36.243 [538/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:36.243 [539/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:36.243 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.243 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:36.243 [542/743] Linking target lib/librte_table.so.23.0 00:02:36.243 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:36.501 [544/743] Generating lib/rte_graph_def with a custom command 00:02:36.501 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:36.501 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:36.759 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:36.759 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:37.017 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:37.017 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:37.018 [551/743] Linking static target lib/librte_graph.a 00:02:37.018 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:37.283 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:37.283 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:37.283 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:37.572 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:37.572 [557/743] Generating lib/rte_node_def with a custom command 00:02:37.572 [558/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:37.841 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:37.841 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.841 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.841 [562/743] Linking target lib/librte_graph.so.23.0 00:02:37.841 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.099 [564/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:38.099 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:38.099 [566/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:38.099 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:38.099 [568/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.099 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:38.099 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.099 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:38.099 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:38.099 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:38.357 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:38.357 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:38.357 [576/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.357 [577/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:38.357 [578/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.357 [579/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:38.357 [580/743] Linking static target lib/librte_node.a 00:02:38.357 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.616 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.616 [583/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.616 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.616 [585/743] Linking static target drivers/librte_bus_vdev.a 00:02:38.616 [586/743] Linking target lib/librte_node.so.23.0 00:02:38.616 [587/743] Compiling 
C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.874 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.874 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.874 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.874 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:39.131 [592/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:39.131 [593/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.131 [594/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.131 [595/743] Linking static target drivers/librte_bus_pci.a 00:02:39.131 [596/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:39.389 [597/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.389 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:39.389 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:39.389 [600/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.389 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:39.389 [602/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.389 [603/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.647 [604/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:39.647 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:39.647 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.647 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.647 [608/743] Linking static target drivers/librte_mempool_ring.a 00:02:39.647 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.648 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:40.214 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:40.473 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:40.731 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:40.731 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:40.989 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:41.247 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:41.247 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:41.817 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:41.817 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:41.817 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:42.080 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:42.080 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:42.080 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:42.080 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:42.080 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:43.017 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:43.275 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:43.275 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:43.533 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:43.533 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:43.533 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:43.791 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:43.791 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:43.791 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:44.049 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:44.050 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:44.615 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:44.615 [638/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:44.615 [639/743] Linking static target lib/librte_vhost.a 00:02:44.615 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:44.615 [641/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:44.615 [642/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:44.873 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:45.130 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:45.130 [645/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:45.130 [646/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:45.130 [647/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:45.130 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:45.130 [649/743] Linking static target drivers/librte_net_i40e.a 00:02:45.388 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:45.388 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:45.646 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:45.903 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.903 [654/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.903 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:45.903 [656/743] Linking target lib/librte_vhost.so.23.0 00:02:45.903 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:45.903 [658/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:46.161 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:46.418 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:46.418 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:46.675 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:46.675 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:46.675 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:46.675 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:46.675 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:46.675 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:46.933 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:46.933 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:47.191 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:47.449 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:47.706 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:47.706 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:48.272 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:48.272 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:48.272 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:48.530 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:48.530 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:48.787 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:48.787 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:49.045 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:49.045 [682/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:49.303 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:49.303 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:49.303 [685/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:49.561 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:49.562 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:49.820 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:49.820 [689/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:49.820 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:49.820 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:49.820 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:50.077 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:50.077 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:50.644 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:50.644 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:50.644 [697/743] Compiling C 
object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:50.901 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:50.901 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:51.160 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:51.160 [701/743] Linking static target lib/librte_pipeline.a 00:02:51.418 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:51.418 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:51.418 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:51.676 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:51.933 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:51.933 [707/743] Linking target app/dpdk-pdump 00:02:51.933 [708/743] Linking target app/dpdk-dumpcap 00:02:51.933 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:51.933 [710/743] Linking target app/dpdk-proc-info 00:02:52.190 [711/743] Linking target app/dpdk-test-acl 00:02:52.191 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:52.191 [713/743] Linking target app/dpdk-test-bbdev 00:02:52.191 [714/743] Linking target app/dpdk-test-cmdline 00:02:52.449 [715/743] Linking target app/dpdk-test-compress-perf 00:02:52.449 [716/743] Linking target app/dpdk-test-crypto-perf 00:02:52.449 [717/743] Linking target app/dpdk-test-eventdev 00:02:52.707 [718/743] Linking target app/dpdk-test-fib 00:02:52.707 [719/743] Linking target app/dpdk-test-flow-perf 00:02:52.707 [720/743] Linking target app/dpdk-test-gpudev 00:02:52.707 [721/743] Linking target app/dpdk-test-pipeline 00:02:53.273 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:53.273 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:53.531 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:53.531 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:53.531 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:53.531 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:53.790 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.790 [729/743] Linking target lib/librte_pipeline.so.23.0 00:02:54.048 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:54.048 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:54.307 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:54.307 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:54.565 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:54.565 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:54.565 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:54.837 [737/743] Linking target app/dpdk-test-sad 00:02:54.837 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:54.837 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:55.129 [740/743] Linking target app/dpdk-test-regex 00:02:55.395 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:55.395 [742/743] Linking target app/dpdk-testpmd 00:02:55.654 [743/743] Linking target app/dpdk-test-security-perf 
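(Note on the step that follows: the DPDK build above finishes at target 743/743, and the autotest script then switches from compiling to installing the freshly built tree. A minimal sketch of that sequence, under the assumption of the standard meson/ninja workflow driven by autobuild_common.sh — only the uname -s guard and the ninja -C ... install invocation appear verbatim in the log below; the meson setup line and the FreeBSD branch body are illustrative assumptions.)

    # Sketch of the install step recorded in the next log lines (assumptions marked inline)
    # meson setup build-tmp                         # illustrative; the setup step happened earlier, outside this excerpt
    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        :                                           # assumed FreeBSD-specific handling; this run reports Linux, so the test fails
    fi
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install   # matches the command shown in the log below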
00:02:55.654 12:44:47 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:55.654 12:44:47 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:55.654 12:44:47 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:55.654 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:55.654 [0/1] Installing files. 00:02:55.914 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:55.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.916 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:55.916 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.177 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.178 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.179 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.180 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.180 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.180 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.181 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.181 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.181 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.181 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.443 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.444 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:56.444 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.444 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.444 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.444 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.444 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
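(The entries above stage the core libraries in build/lib, the PMD shared objects in build/lib/dpdk/pmds-23.0, and the test binaries in build/bin. As an illustrative sketch only — not part of the captured log — a tree laid out like this can typically be exercised in place by pointing the dynamic loader at the library directory and the EAL at the driver directory; the paths are taken from the entries above, and the -d/-l/-n flags are standard DPDK EAL options that should be verified against the installed 22.11 release:

  # sketch: run the freshly installed testpmd against this build tree
  export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib:$LD_LIBRARY_PATH
  /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-testpmd \
      -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 \
      -l 0-1 -n 4 -- -i
)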
00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.444 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.445 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.446 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.447 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:56.448 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:56.448 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:56.448 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:56.448 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:56.448 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:56.448 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:56.448 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:56.448 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:56.448 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:56.448 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:56.448 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:56.448 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:56.448 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:56.448 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:56.448 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:56.448 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:56.448 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:56.449 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:56.449 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:56.449 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:56.449 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:56.449 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:56.449 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:56.449 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:56.449 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:56.449 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:56.449 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:56.449 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:56.449 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:56.449 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:56.449 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:56.449 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:56.449 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:56.449 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:56.449 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:56.449 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:56.449 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:56.449 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:56.449 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:56.449 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:56.449 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:56.449 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:56.449 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:56.449 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:56.449 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:56.449 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:56.449 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:56.449 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:56.449 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:56.449 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:56.449 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:56.449 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:56.449 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:56.449 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:56.449 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:56.449 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:56.449 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:56.449 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:56.449 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:56.449 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:56.449 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:56.449 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:56.449 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:56.449 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:56.449 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:56.449 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:56.449 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:56.449 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:56.449 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:56.449 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:56.449 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:56.449 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:56.449 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:56.449 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:56.449 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:56.449 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:56.449 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:56.449 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:56.449 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:56.449 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:56.449 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:56.449 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:56.449 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:56.449 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:56.449 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:56.450 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:56.450 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:56.450 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:56.450 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:56.450 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:56.450 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:56.450 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:56.450 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:56.450 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:56.450 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:56.450 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:56.450 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:56.450 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:56.450 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:56.450 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:56.450 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:56.450 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:56.450 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:56.450 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:56.450 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:56.450 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
00:02:56.450 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:56.450 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:56.450 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:56.450 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:56.450 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:56.450 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:56.450 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:56.450 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:56.450 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:56.450 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:56.450 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:56.450 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:56.450 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:56.450 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:56.450 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:56.450 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:56.450 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:56.450 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:56.450 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:56.450 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:56.450 12:44:48 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:56.450 12:44:48 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:56.450 00:02:56.450 real 0m51.680s 00:02:56.450 user 6m8.763s 00:02:56.450 sys 0m54.751s 00:02:56.450 ************************************ 00:02:56.450 END TEST build_native_dpdk 00:02:56.450 ************************************ 00:02:56.450 12:44:48 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:56.450 12:44:48 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:56.709 12:44:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:56.709 12:44:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:56.709 12:44:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:56.709 12:44:48 -- spdk/autobuild.sh@55 -- $ [[ -n 
'' ]] 00:02:56.709 12:44:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:56.709 12:44:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:56.709 12:44:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:56.709 12:44:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:56.709 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:56.968 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.968 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:56.968 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:57.227 Using 'verbs' RDMA provider 00:03:10.387 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:25.265 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:25.265 Creating mk/config.mk...done. 00:03:25.265 Creating mk/cc.flags.mk...done. 00:03:25.265 Type 'make' to build. 00:03:25.265 12:45:14 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:25.265 12:45:14 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:25.265 12:45:14 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:25.265 12:45:14 -- common/autotest_common.sh@10 -- $ set +x 00:03:25.265 ************************************ 00:03:25.265 START TEST make 00:03:25.265 ************************************ 00:03:25.265 12:45:14 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:25.265 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:25.265 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:25.265 meson setup builddir \ 00:03:25.265 -Dwith-libaio=enabled \ 00:03:25.265 -Dwith-liburing=enabled \ 00:03:25.265 -Dwith-libvfn=disabled \ 00:03:25.265 -Dwith-spdk=false && \ 00:03:25.265 meson compile -C builddir && \ 00:03:25.265 cd -) 00:03:25.265 make[1]: Nothing to be done for 'all'. 
00:03:26.641 The Meson build system 00:03:26.641 Version: 1.5.0 00:03:26.641 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:26.641 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:26.641 Build type: native build 00:03:26.641 Project name: xnvme 00:03:26.641 Project version: 0.7.3 00:03:26.641 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:26.641 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:26.641 Host machine cpu family: x86_64 00:03:26.641 Host machine cpu: x86_64 00:03:26.641 Message: host_machine.system: linux 00:03:26.641 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:26.641 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:26.641 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:26.641 Run-time dependency threads found: YES 00:03:26.641 Has header "setupapi.h" : NO 00:03:26.641 Has header "linux/blkzoned.h" : YES 00:03:26.641 Has header "linux/blkzoned.h" : YES (cached) 00:03:26.641 Has header "libaio.h" : YES 00:03:26.641 Library aio found: YES 00:03:26.641 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:26.641 Run-time dependency liburing found: YES 2.2 00:03:26.641 Dependency libvfn skipped: feature with-libvfn disabled 00:03:26.641 Run-time dependency appleframeworks found: NO (tried framework) 00:03:26.641 Run-time dependency appleframeworks found: NO (tried framework) 00:03:26.641 Configuring xnvme_config.h using configuration 00:03:26.641 Configuring xnvme.spec using configuration 00:03:26.641 Run-time dependency bash-completion found: YES 2.11 00:03:26.641 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:26.641 Program cp found: YES (/usr/bin/cp) 00:03:26.641 Has header "winsock2.h" : NO 00:03:26.641 Has header "dbghelp.h" : NO 00:03:26.641 Library rpcrt4 found: NO 00:03:26.641 Library rt found: YES 00:03:26.641 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:26.641 Found CMake: /usr/bin/cmake (3.27.7) 00:03:26.641 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:26.641 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:26.641 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:26.641 Build targets in project: 32 00:03:26.641 00:03:26.641 xnvme 0.7.3 00:03:26.641 00:03:26.641 User defined options 00:03:26.641 with-libaio : enabled 00:03:26.641 with-liburing: enabled 00:03:26.641 with-libvfn : disabled 00:03:26.641 with-spdk : false 00:03:26.641 00:03:26.641 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:27.208 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:27.208 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:27.208 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:27.208 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:27.208 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:27.208 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:27.208 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:27.208 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:27.208 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:27.208 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:27.208 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 
00:03:27.208 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:27.208 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:27.208 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:27.208 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:27.208 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:27.467 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:27.467 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:27.467 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:27.467 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:27.467 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:27.467 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:27.467 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:27.467 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:27.467 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:27.467 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:27.467 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:27.467 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:27.467 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:27.467 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:27.467 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:27.467 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:27.467 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:27.467 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:27.467 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:27.467 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:27.467 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:27.467 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:27.467 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:27.467 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:27.467 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:27.467 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:27.725 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:27.725 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:27.725 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:27.725 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:27.725 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:27.725 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:27.725 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:27.725 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:27.725 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:27.725 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:27.725 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:27.725 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:27.725 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:27.725 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:27.725 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:27.725 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:27.725 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:27.725 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:27.725 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:27.725 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:27.725 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:27.725 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:27.725 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:27.983 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:27.983 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:27.983 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:27.983 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:27.983 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:27.983 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:27.983 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:27.983 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:27.983 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:27.983 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:27.983 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:27.983 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:27.983 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:28.241 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:28.241 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:28.241 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:28.241 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:28.241 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:28.241 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:28.241 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:28.241 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:28.241 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:28.241 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:28.241 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:28.241 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:28.241 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:28.241 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:28.500 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:28.500 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:28.500 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:28.500 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:28.500 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:28.500 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:28.500 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:28.500 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:28.500 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:28.500 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:28.500 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:28.500 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:28.500 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:28.500 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:28.500 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:28.500 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:28.500 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:28.500 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:28.500 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:28.500 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:28.500 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:28.500 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:28.500 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:28.500 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:28.500 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:28.500 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:28.500 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:28.500 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:28.500 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:28.500 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:28.500 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:28.759 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:28.759 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:28.759 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:28.759 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:28.759 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:28.759 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:28.759 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:28.759 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:28.759 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:28.759 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:28.759 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:28.759 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:28.759 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:28.759 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:28.759 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:28.759 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:29.018 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:29.018 [140/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:29.018 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:29.018 [142/203] Compiling C object 
tests/xnvme_tests_buf.p/buf.c.o 00:03:29.018 [143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:29.018 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:29.018 [145/203] Linking target lib/libxnvme.so 00:03:29.018 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:29.018 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:29.018 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:29.018 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:29.018 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:29.018 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:29.276 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:29.276 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:29.276 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:29.276 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:29.276 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:29.276 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:29.276 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:29.276 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:29.276 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:29.276 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:29.276 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:29.276 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:29.534 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:29.534 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:29.534 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:29.534 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:29.534 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:29.534 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:29.534 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:29.534 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:29.793 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:29.793 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:29.793 [174/203] Linking static target lib/libxnvme.a 00:03:29.793 [175/203] Linking target tests/xnvme_tests_lblk 00:03:29.793 [176/203] Linking target tests/xnvme_tests_cli 00:03:29.793 [177/203] Linking target tests/xnvme_tests_buf 00:03:29.793 [178/203] Linking target tests/xnvme_tests_scc 00:03:29.793 [179/203] Linking target tests/xnvme_tests_enum 00:03:29.793 [180/203] Linking target tests/xnvme_tests_ioworker 00:03:29.793 [181/203] Linking target tests/xnvme_tests_xnvme_file 00:03:29.793 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:29.793 [183/203] Linking target tests/xnvme_tests_async_intf 00:03:29.793 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:29.793 [185/203] Linking target tests/xnvme_tests_znd_append 00:03:29.793 [186/203] Linking target tests/xnvme_tests_znd_state 00:03:29.793 [187/203] Linking target tests/xnvme_tests_kvs 00:03:29.793 [188/203] Linking target tools/xdd 00:03:29.793 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:29.793 [190/203] Linking target tests/xnvme_tests_map 00:03:29.793 
[191/203] Linking target tools/lblk 00:03:30.051 [192/203] Linking target tools/xnvme_file 00:03:30.051 [193/203] Linking target examples/xnvme_dev 00:03:30.051 [194/203] Linking target tools/xnvme 00:03:30.051 [195/203] Linking target tools/kvs 00:03:30.051 [196/203] Linking target tools/zoned 00:03:30.051 [197/203] Linking target examples/zoned_io_async 00:03:30.051 [198/203] Linking target examples/xnvme_io_async 00:03:30.051 [199/203] Linking target examples/xnvme_enum 00:03:30.051 [200/203] Linking target examples/xnvme_single_sync 00:03:30.051 [201/203] Linking target examples/xnvme_hello 00:03:30.051 [202/203] Linking target examples/xnvme_single_async 00:03:30.051 [203/203] Linking target examples/zoned_io_sync 00:03:30.051 INFO: autodetecting backend as ninja 00:03:30.051 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:30.051 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:52.000 CC lib/log/log.o 00:03:52.000 CC lib/log/log_flags.o 00:03:52.000 CC lib/ut/ut.o 00:03:52.001 CC lib/log/log_deprecated.o 00:03:52.001 CC lib/ut_mock/mock.o 00:03:52.001 LIB libspdk_ut.a 00:03:52.001 LIB libspdk_log.a 00:03:52.001 SO libspdk_ut.so.2.0 00:03:52.001 LIB libspdk_ut_mock.a 00:03:52.001 SO libspdk_log.so.7.0 00:03:52.001 SO libspdk_ut_mock.so.6.0 00:03:52.001 SYMLINK libspdk_ut.so 00:03:52.001 SYMLINK libspdk_ut_mock.so 00:03:52.001 SYMLINK libspdk_log.so 00:03:52.001 CC lib/dma/dma.o 00:03:52.001 CC lib/ioat/ioat.o 00:03:52.001 CXX lib/trace_parser/trace.o 00:03:52.001 CC lib/util/base64.o 00:03:52.001 CC lib/util/bit_array.o 00:03:52.001 CC lib/util/cpuset.o 00:03:52.001 CC lib/util/crc16.o 00:03:52.001 CC lib/util/crc32c.o 00:03:52.001 CC lib/util/crc32.o 00:03:52.001 CC lib/vfio_user/host/vfio_user_pci.o 00:03:52.001 CC lib/vfio_user/host/vfio_user.o 00:03:52.001 CC lib/util/crc32_ieee.o 00:03:52.001 CC lib/util/crc64.o 00:03:52.001 CC lib/util/dif.o 00:03:52.001 LIB libspdk_dma.a 00:03:52.001 CC lib/util/fd.o 00:03:52.001 CC lib/util/fd_group.o 00:03:52.001 SO libspdk_dma.so.4.0 00:03:52.001 CC lib/util/file.o 00:03:52.001 SYMLINK libspdk_dma.so 00:03:52.001 CC lib/util/hexlify.o 00:03:52.001 CC lib/util/iov.o 00:03:52.001 CC lib/util/math.o 00:03:52.001 CC lib/util/net.o 00:03:52.001 LIB libspdk_ioat.a 00:03:52.001 LIB libspdk_vfio_user.a 00:03:52.001 SO libspdk_ioat.so.7.0 00:03:52.001 SO libspdk_vfio_user.so.5.0 00:03:52.001 CC lib/util/pipe.o 00:03:52.001 CC lib/util/strerror_tls.o 00:03:52.001 SYMLINK libspdk_ioat.so 00:03:52.001 CC lib/util/string.o 00:03:52.001 CC lib/util/uuid.o 00:03:52.001 SYMLINK libspdk_vfio_user.so 00:03:52.001 CC lib/util/xor.o 00:03:52.001 CC lib/util/zipf.o 00:03:52.001 LIB libspdk_util.a 00:03:52.001 SO libspdk_util.so.10.0 00:03:52.001 LIB libspdk_trace_parser.a 00:03:52.001 SO libspdk_trace_parser.so.5.0 00:03:52.001 SYMLINK libspdk_util.so 00:03:52.001 SYMLINK libspdk_trace_parser.so 00:03:52.001 CC lib/json/json_parse.o 00:03:52.001 CC lib/json/json_util.o 00:03:52.001 CC lib/json/json_write.o 00:03:52.001 CC lib/rdma_utils/rdma_utils.o 00:03:52.001 CC lib/env_dpdk/env.o 00:03:52.001 CC lib/env_dpdk/memory.o 00:03:52.001 CC lib/conf/conf.o 00:03:52.001 CC lib/idxd/idxd.o 00:03:52.001 CC lib/rdma_provider/common.o 00:03:52.001 CC lib/vmd/vmd.o 00:03:52.001 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:52.001 LIB libspdk_conf.a 00:03:52.001 CC lib/vmd/led.o 00:03:52.001 SO libspdk_conf.so.6.0 00:03:52.001 CC lib/env_dpdk/pci.o 00:03:52.001 LIB libspdk_rdma_utils.a 00:03:52.001 SO 
libspdk_rdma_utils.so.1.0 00:03:52.001 LIB libspdk_json.a 00:03:52.001 SYMLINK libspdk_conf.so 00:03:52.001 CC lib/idxd/idxd_user.o 00:03:52.001 SO libspdk_json.so.6.0 00:03:52.001 SYMLINK libspdk_rdma_utils.so 00:03:52.001 CC lib/idxd/idxd_kernel.o 00:03:52.001 CC lib/env_dpdk/init.o 00:03:52.001 LIB libspdk_rdma_provider.a 00:03:52.001 CC lib/env_dpdk/threads.o 00:03:52.001 SYMLINK libspdk_json.so 00:03:52.001 SO libspdk_rdma_provider.so.6.0 00:03:52.001 SYMLINK libspdk_rdma_provider.so 00:03:52.001 CC lib/env_dpdk/pci_ioat.o 00:03:52.001 CC lib/env_dpdk/pci_virtio.o 00:03:52.001 CC lib/env_dpdk/pci_vmd.o 00:03:52.001 CC lib/env_dpdk/pci_idxd.o 00:03:52.001 CC lib/jsonrpc/jsonrpc_server.o 00:03:52.001 CC lib/env_dpdk/pci_event.o 00:03:52.001 CC lib/env_dpdk/sigbus_handler.o 00:03:52.001 CC lib/env_dpdk/pci_dpdk.o 00:03:52.001 LIB libspdk_idxd.a 00:03:52.001 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:52.001 SO libspdk_idxd.so.12.0 00:03:52.001 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:52.258 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:52.258 SYMLINK libspdk_idxd.so 00:03:52.258 CC lib/jsonrpc/jsonrpc_client.o 00:03:52.258 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:52.258 LIB libspdk_vmd.a 00:03:52.258 SO libspdk_vmd.so.6.0 00:03:52.258 SYMLINK libspdk_vmd.so 00:03:52.517 LIB libspdk_jsonrpc.a 00:03:52.517 SO libspdk_jsonrpc.so.6.0 00:03:52.517 SYMLINK libspdk_jsonrpc.so 00:03:52.775 CC lib/rpc/rpc.o 00:03:53.033 LIB libspdk_rpc.a 00:03:53.033 SO libspdk_rpc.so.6.0 00:03:53.291 SYMLINK libspdk_rpc.so 00:03:53.291 LIB libspdk_env_dpdk.a 00:03:53.291 SO libspdk_env_dpdk.so.15.0 00:03:53.291 CC lib/notify/notify.o 00:03:53.291 CC lib/notify/notify_rpc.o 00:03:53.291 CC lib/trace/trace_flags.o 00:03:53.549 CC lib/trace/trace.o 00:03:53.549 CC lib/keyring/keyring.o 00:03:53.549 CC lib/trace/trace_rpc.o 00:03:53.549 CC lib/keyring/keyring_rpc.o 00:03:53.549 SYMLINK libspdk_env_dpdk.so 00:03:53.549 LIB libspdk_notify.a 00:03:53.549 SO libspdk_notify.so.6.0 00:03:53.807 LIB libspdk_keyring.a 00:03:53.807 SYMLINK libspdk_notify.so 00:03:53.807 LIB libspdk_trace.a 00:03:53.807 SO libspdk_keyring.so.1.0 00:03:53.807 SO libspdk_trace.so.10.0 00:03:53.807 SYMLINK libspdk_keyring.so 00:03:53.807 SYMLINK libspdk_trace.so 00:03:54.065 CC lib/sock/sock_rpc.o 00:03:54.065 CC lib/sock/sock.o 00:03:54.065 CC lib/thread/iobuf.o 00:03:54.065 CC lib/thread/thread.o 00:03:54.631 LIB libspdk_sock.a 00:03:54.631 SO libspdk_sock.so.10.0 00:03:54.631 SYMLINK libspdk_sock.so 00:03:54.889 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:54.889 CC lib/nvme/nvme_ctrlr.o 00:03:54.889 CC lib/nvme/nvme_fabric.o 00:03:54.889 CC lib/nvme/nvme_ns_cmd.o 00:03:54.889 CC lib/nvme/nvme_pcie.o 00:03:54.889 CC lib/nvme/nvme_pcie_common.o 00:03:54.889 CC lib/nvme/nvme_ns.o 00:03:54.889 CC lib/nvme/nvme_qpair.o 00:03:54.889 CC lib/nvme/nvme.o 00:03:55.824 CC lib/nvme/nvme_quirks.o 00:03:55.824 CC lib/nvme/nvme_transport.o 00:03:55.824 CC lib/nvme/nvme_discovery.o 00:03:55.824 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:56.083 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:56.083 CC lib/nvme/nvme_tcp.o 00:03:56.083 LIB libspdk_thread.a 00:03:56.083 CC lib/nvme/nvme_opal.o 00:03:56.083 SO libspdk_thread.so.10.1 00:03:56.341 SYMLINK libspdk_thread.so 00:03:56.341 CC lib/nvme/nvme_io_msg.o 00:03:56.341 CC lib/nvme/nvme_poll_group.o 00:03:56.341 CC lib/nvme/nvme_zns.o 00:03:56.599 CC lib/nvme/nvme_stubs.o 00:03:56.857 CC lib/blob/blobstore.o 00:03:56.857 CC lib/accel/accel.o 00:03:56.857 CC lib/nvme/nvme_auth.o 00:03:56.857 CC lib/accel/accel_rpc.o 00:03:56.857 CC 
lib/init/json_config.o 00:03:57.114 CC lib/accel/accel_sw.o 00:03:57.114 CC lib/nvme/nvme_cuse.o 00:03:57.114 CC lib/init/subsystem.o 00:03:57.372 CC lib/init/subsystem_rpc.o 00:03:57.372 CC lib/init/rpc.o 00:03:57.372 CC lib/blob/request.o 00:03:57.372 CC lib/nvme/nvme_rdma.o 00:03:57.372 LIB libspdk_init.a 00:03:57.628 SO libspdk_init.so.5.0 00:03:57.628 CC lib/virtio/virtio.o 00:03:57.628 SYMLINK libspdk_init.so 00:03:57.628 CC lib/event/app.o 00:03:57.886 CC lib/virtio/virtio_vhost_user.o 00:03:57.886 CC lib/event/reactor.o 00:03:57.886 CC lib/blob/zeroes.o 00:03:57.886 CC lib/virtio/virtio_vfio_user.o 00:03:58.144 CC lib/virtio/virtio_pci.o 00:03:58.144 LIB libspdk_accel.a 00:03:58.144 SO libspdk_accel.so.16.0 00:03:58.144 CC lib/event/log_rpc.o 00:03:58.144 CC lib/blob/blob_bs_dev.o 00:03:58.144 SYMLINK libspdk_accel.so 00:03:58.144 CC lib/event/app_rpc.o 00:03:58.144 CC lib/event/scheduler_static.o 00:03:58.401 CC lib/bdev/bdev.o 00:03:58.401 CC lib/bdev/bdev_zone.o 00:03:58.401 CC lib/bdev/bdev_rpc.o 00:03:58.401 CC lib/bdev/part.o 00:03:58.401 CC lib/bdev/scsi_nvme.o 00:03:58.401 LIB libspdk_virtio.a 00:03:58.402 SO libspdk_virtio.so.7.0 00:03:58.402 LIB libspdk_event.a 00:03:58.402 SYMLINK libspdk_virtio.so 00:03:58.659 SO libspdk_event.so.14.0 00:03:58.659 SYMLINK libspdk_event.so 00:03:59.225 LIB libspdk_nvme.a 00:03:59.483 SO libspdk_nvme.so.13.1 00:03:59.742 SYMLINK libspdk_nvme.so 00:04:01.118 LIB libspdk_blob.a 00:04:01.118 SO libspdk_blob.so.11.0 00:04:01.118 SYMLINK libspdk_blob.so 00:04:01.386 CC lib/lvol/lvol.o 00:04:01.386 CC lib/blobfs/blobfs.o 00:04:01.386 CC lib/blobfs/tree.o 00:04:01.952 LIB libspdk_bdev.a 00:04:01.952 SO libspdk_bdev.so.16.0 00:04:01.952 SYMLINK libspdk_bdev.so 00:04:02.210 CC lib/ublk/ublk.o 00:04:02.210 CC lib/ublk/ublk_rpc.o 00:04:02.210 CC lib/ftl/ftl_core.o 00:04:02.210 CC lib/ftl/ftl_layout.o 00:04:02.210 CC lib/ftl/ftl_init.o 00:04:02.210 CC lib/nbd/nbd.o 00:04:02.210 CC lib/nvmf/ctrlr.o 00:04:02.210 CC lib/scsi/dev.o 00:04:02.468 CC lib/scsi/lun.o 00:04:02.468 CC lib/ftl/ftl_debug.o 00:04:02.468 CC lib/nvmf/ctrlr_discovery.o 00:04:02.726 LIB libspdk_blobfs.a 00:04:02.726 CC lib/nvmf/ctrlr_bdev.o 00:04:02.726 SO libspdk_blobfs.so.10.0 00:04:02.726 LIB libspdk_lvol.a 00:04:02.726 SO libspdk_lvol.so.10.0 00:04:02.726 CC lib/scsi/port.o 00:04:02.726 CC lib/ftl/ftl_io.o 00:04:02.726 SYMLINK libspdk_blobfs.so 00:04:02.726 CC lib/ftl/ftl_sb.o 00:04:02.726 SYMLINK libspdk_lvol.so 00:04:02.726 CC lib/scsi/scsi.o 00:04:02.726 CC lib/nbd/nbd_rpc.o 00:04:02.726 CC lib/scsi/scsi_bdev.o 00:04:02.984 CC lib/scsi/scsi_pr.o 00:04:02.984 CC lib/scsi/scsi_rpc.o 00:04:02.984 CC lib/scsi/task.o 00:04:02.984 LIB libspdk_nbd.a 00:04:02.984 SO libspdk_nbd.so.7.0 00:04:02.984 CC lib/ftl/ftl_l2p.o 00:04:02.984 LIB libspdk_ublk.a 00:04:02.984 SO libspdk_ublk.so.3.0 00:04:03.242 CC lib/nvmf/subsystem.o 00:04:03.242 SYMLINK libspdk_nbd.so 00:04:03.242 CC lib/nvmf/nvmf.o 00:04:03.242 SYMLINK libspdk_ublk.so 00:04:03.242 CC lib/nvmf/nvmf_rpc.o 00:04:03.242 CC lib/nvmf/transport.o 00:04:03.242 CC lib/nvmf/tcp.o 00:04:03.242 CC lib/ftl/ftl_l2p_flat.o 00:04:03.242 CC lib/ftl/ftl_nv_cache.o 00:04:03.501 LIB libspdk_scsi.a 00:04:03.501 CC lib/ftl/ftl_band.o 00:04:03.501 CC lib/nvmf/stubs.o 00:04:03.501 SO libspdk_scsi.so.9.0 00:04:03.759 SYMLINK libspdk_scsi.so 00:04:03.759 CC lib/nvmf/mdns_server.o 00:04:04.017 CC lib/nvmf/rdma.o 00:04:04.017 CC lib/nvmf/auth.o 00:04:04.275 CC lib/ftl/ftl_band_ops.o 00:04:04.275 CC lib/ftl/ftl_writer.o 00:04:04.275 CC lib/iscsi/conn.o 
00:04:04.533 CC lib/vhost/vhost.o 00:04:04.533 CC lib/ftl/ftl_rq.o 00:04:04.533 CC lib/ftl/ftl_reloc.o 00:04:04.791 CC lib/ftl/ftl_l2p_cache.o 00:04:04.791 CC lib/iscsi/init_grp.o 00:04:04.791 CC lib/iscsi/iscsi.o 00:04:04.791 CC lib/iscsi/md5.o 00:04:05.049 CC lib/vhost/vhost_rpc.o 00:04:05.050 CC lib/iscsi/param.o 00:04:05.050 CC lib/iscsi/portal_grp.o 00:04:05.050 CC lib/iscsi/tgt_node.o 00:04:05.308 CC lib/iscsi/iscsi_subsystem.o 00:04:05.308 CC lib/iscsi/iscsi_rpc.o 00:04:05.308 CC lib/ftl/ftl_p2l.o 00:04:05.308 CC lib/iscsi/task.o 00:04:05.308 CC lib/ftl/mngt/ftl_mngt.o 00:04:05.566 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:05.566 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:05.824 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:05.824 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:05.824 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.824 CC lib/vhost/vhost_scsi.o 00:04:05.824 CC lib/vhost/vhost_blk.o 00:04:05.824 CC lib/vhost/rte_vhost_user.o 00:04:05.824 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.824 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:06.083 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:06.083 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:06.083 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:06.083 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:06.083 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:06.341 CC lib/ftl/utils/ftl_conf.o 00:04:06.341 CC lib/ftl/utils/ftl_md.o 00:04:06.341 CC lib/ftl/utils/ftl_mempool.o 00:04:06.341 CC lib/ftl/utils/ftl_bitmap.o 00:04:06.599 CC lib/ftl/utils/ftl_property.o 00:04:06.599 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:06.599 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:06.599 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:06.599 LIB libspdk_iscsi.a 00:04:06.857 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:06.857 SO libspdk_iscsi.so.8.0 00:04:06.857 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:06.857 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:06.857 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:06.857 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:06.857 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:07.115 SYMLINK libspdk_iscsi.so 00:04:07.115 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:07.115 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:07.115 LIB libspdk_nvmf.a 00:04:07.115 CC lib/ftl/base/ftl_base_dev.o 00:04:07.115 CC lib/ftl/base/ftl_base_bdev.o 00:04:07.115 CC lib/ftl/ftl_trace.o 00:04:07.115 LIB libspdk_vhost.a 00:04:07.115 SO libspdk_nvmf.so.19.0 00:04:07.115 SO libspdk_vhost.so.8.0 00:04:07.373 SYMLINK libspdk_vhost.so 00:04:07.373 LIB libspdk_ftl.a 00:04:07.632 SYMLINK libspdk_nvmf.so 00:04:07.632 SO libspdk_ftl.so.9.0 00:04:07.890 SYMLINK libspdk_ftl.so 00:04:08.456 CC module/env_dpdk/env_dpdk_rpc.o 00:04:08.456 CC module/accel/error/accel_error.o 00:04:08.456 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:08.456 CC module/keyring/file/keyring.o 00:04:08.456 CC module/sock/posix/posix.o 00:04:08.456 CC module/accel/dsa/accel_dsa.o 00:04:08.456 CC module/blob/bdev/blob_bdev.o 00:04:08.456 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:08.456 CC module/scheduler/gscheduler/gscheduler.o 00:04:08.456 CC module/accel/ioat/accel_ioat.o 00:04:08.456 LIB libspdk_env_dpdk_rpc.a 00:04:08.456 SO libspdk_env_dpdk_rpc.so.6.0 00:04:08.456 SYMLINK libspdk_env_dpdk_rpc.so 00:04:08.456 CC module/accel/ioat/accel_ioat_rpc.o 00:04:08.456 CC module/keyring/file/keyring_rpc.o 00:04:08.456 LIB libspdk_scheduler_gscheduler.a 00:04:08.456 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.456 CC module/accel/error/accel_error_rpc.o 00:04:08.456 SO libspdk_scheduler_gscheduler.so.4.0 00:04:08.714 SO libspdk_scheduler_dpdk_governor.so.4.0 
00:04:08.714 LIB libspdk_scheduler_dynamic.a 00:04:08.714 SO libspdk_scheduler_dynamic.so.4.0 00:04:08.714 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.714 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.714 CC module/accel/dsa/accel_dsa_rpc.o 00:04:08.714 LIB libspdk_keyring_file.a 00:04:08.714 LIB libspdk_accel_ioat.a 00:04:08.714 SYMLINK libspdk_scheduler_dynamic.so 00:04:08.714 LIB libspdk_blob_bdev.a 00:04:08.714 SO libspdk_accel_ioat.so.6.0 00:04:08.714 SO libspdk_keyring_file.so.1.0 00:04:08.714 LIB libspdk_accel_error.a 00:04:08.714 SO libspdk_blob_bdev.so.11.0 00:04:08.714 SYMLINK libspdk_keyring_file.so 00:04:08.714 SYMLINK libspdk_accel_ioat.so 00:04:08.714 SO libspdk_accel_error.so.2.0 00:04:08.714 LIB libspdk_accel_dsa.a 00:04:08.714 SYMLINK libspdk_blob_bdev.so 00:04:08.714 CC module/keyring/linux/keyring.o 00:04:08.714 CC module/keyring/linux/keyring_rpc.o 00:04:08.714 SO libspdk_accel_dsa.so.5.0 00:04:08.972 CC module/accel/iaa/accel_iaa.o 00:04:08.972 CC module/accel/iaa/accel_iaa_rpc.o 00:04:08.972 SYMLINK libspdk_accel_error.so 00:04:08.972 SYMLINK libspdk_accel_dsa.so 00:04:08.972 LIB libspdk_keyring_linux.a 00:04:08.972 SO libspdk_keyring_linux.so.1.0 00:04:08.972 CC module/bdev/delay/vbdev_delay.o 00:04:08.972 CC module/bdev/error/vbdev_error.o 00:04:08.972 CC module/bdev/gpt/gpt.o 00:04:08.972 LIB libspdk_accel_iaa.a 00:04:08.972 CC module/blobfs/bdev/blobfs_bdev.o 00:04:09.229 SYMLINK libspdk_keyring_linux.so 00:04:09.229 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:09.229 CC module/bdev/lvol/vbdev_lvol.o 00:04:09.229 SO libspdk_accel_iaa.so.3.0 00:04:09.229 CC module/bdev/malloc/bdev_malloc.o 00:04:09.229 CC module/bdev/null/bdev_null.o 00:04:09.229 SYMLINK libspdk_accel_iaa.so 00:04:09.229 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:09.229 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:09.229 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:09.229 CC module/bdev/gpt/vbdev_gpt.o 00:04:09.229 LIB libspdk_sock_posix.a 00:04:09.487 SO libspdk_sock_posix.so.6.0 00:04:09.487 CC module/bdev/error/vbdev_error_rpc.o 00:04:09.487 CC module/bdev/null/bdev_null_rpc.o 00:04:09.487 SYMLINK libspdk_sock_posix.so 00:04:09.487 LIB libspdk_blobfs_bdev.a 00:04:09.487 SO libspdk_blobfs_bdev.so.6.0 00:04:09.487 LIB libspdk_bdev_delay.a 00:04:09.487 SO libspdk_bdev_delay.so.6.0 00:04:09.487 LIB libspdk_bdev_error.a 00:04:09.487 SYMLINK libspdk_blobfs_bdev.so 00:04:09.487 LIB libspdk_bdev_null.a 00:04:09.745 SO libspdk_bdev_error.so.6.0 00:04:09.745 LIB libspdk_bdev_gpt.a 00:04:09.745 SO libspdk_bdev_null.so.6.0 00:04:09.745 SYMLINK libspdk_bdev_delay.so 00:04:09.745 LIB libspdk_bdev_malloc.a 00:04:09.745 CC module/bdev/nvme/bdev_nvme.o 00:04:09.745 SO libspdk_bdev_gpt.so.6.0 00:04:09.745 CC module/bdev/passthru/vbdev_passthru.o 00:04:09.745 SO libspdk_bdev_malloc.so.6.0 00:04:09.745 SYMLINK libspdk_bdev_error.so 00:04:09.745 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:09.745 SYMLINK libspdk_bdev_null.so 00:04:09.745 CC module/bdev/nvme/nvme_rpc.o 00:04:09.745 LIB libspdk_bdev_lvol.a 00:04:09.745 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:09.745 SYMLINK libspdk_bdev_gpt.so 00:04:09.745 SYMLINK libspdk_bdev_malloc.so 00:04:09.745 CC module/bdev/raid/bdev_raid.o 00:04:09.745 SO libspdk_bdev_lvol.so.6.0 00:04:09.745 CC module/bdev/split/vbdev_split.o 00:04:10.003 SYMLINK libspdk_bdev_lvol.so 00:04:10.003 CC module/bdev/split/vbdev_split_rpc.o 00:04:10.003 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:10.003 CC module/bdev/xnvme/bdev_xnvme.o 00:04:10.003 CC 
module/bdev/nvme/bdev_mdns_client.o 00:04:10.003 CC module/bdev/nvme/vbdev_opal.o 00:04:10.003 LIB libspdk_bdev_passthru.a 00:04:10.003 CC module/bdev/raid/bdev_raid_rpc.o 00:04:10.003 SO libspdk_bdev_passthru.so.6.0 00:04:10.003 LIB libspdk_bdev_split.a 00:04:10.260 SO libspdk_bdev_split.so.6.0 00:04:10.260 CC module/bdev/raid/bdev_raid_sb.o 00:04:10.260 SYMLINK libspdk_bdev_passthru.so 00:04:10.260 CC module/bdev/raid/raid0.o 00:04:10.260 SYMLINK libspdk_bdev_split.so 00:04:10.260 CC module/bdev/raid/raid1.o 00:04:10.260 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:10.260 CC module/bdev/raid/concat.o 00:04:10.260 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:10.517 LIB libspdk_bdev_xnvme.a 00:04:10.517 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:10.517 SO libspdk_bdev_xnvme.so.3.0 00:04:10.517 CC module/bdev/aio/bdev_aio.o 00:04:10.517 LIB libspdk_bdev_zone_block.a 00:04:10.517 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:10.517 SYMLINK libspdk_bdev_xnvme.so 00:04:10.517 CC module/bdev/aio/bdev_aio_rpc.o 00:04:10.517 SO libspdk_bdev_zone_block.so.6.0 00:04:10.775 SYMLINK libspdk_bdev_zone_block.so 00:04:10.775 CC module/bdev/ftl/bdev_ftl.o 00:04:10.775 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:10.775 CC module/bdev/iscsi/bdev_iscsi.o 00:04:10.775 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:10.775 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:10.775 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:10.775 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:11.044 LIB libspdk_bdev_aio.a 00:04:11.044 SO libspdk_bdev_aio.so.6.0 00:04:11.044 SYMLINK libspdk_bdev_aio.so 00:04:11.044 LIB libspdk_bdev_ftl.a 00:04:11.044 SO libspdk_bdev_ftl.so.6.0 00:04:11.044 LIB libspdk_bdev_raid.a 00:04:11.306 SYMLINK libspdk_bdev_ftl.so 00:04:11.306 SO libspdk_bdev_raid.so.6.0 00:04:11.306 LIB libspdk_bdev_iscsi.a 00:04:11.306 SO libspdk_bdev_iscsi.so.6.0 00:04:11.306 SYMLINK libspdk_bdev_raid.so 00:04:11.306 SYMLINK libspdk_bdev_iscsi.so 00:04:11.565 LIB libspdk_bdev_virtio.a 00:04:11.565 SO libspdk_bdev_virtio.so.6.0 00:04:11.565 SYMLINK libspdk_bdev_virtio.so 00:04:12.514 LIB libspdk_bdev_nvme.a 00:04:12.772 SO libspdk_bdev_nvme.so.7.0 00:04:12.772 SYMLINK libspdk_bdev_nvme.so 00:04:13.339 CC module/event/subsystems/vmd/vmd.o 00:04:13.339 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:13.339 CC module/event/subsystems/sock/sock.o 00:04:13.339 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:13.339 CC module/event/subsystems/iobuf/iobuf.o 00:04:13.339 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:13.339 CC module/event/subsystems/scheduler/scheduler.o 00:04:13.339 CC module/event/subsystems/keyring/keyring.o 00:04:13.339 LIB libspdk_event_vhost_blk.a 00:04:13.339 LIB libspdk_event_sock.a 00:04:13.339 LIB libspdk_event_vmd.a 00:04:13.596 LIB libspdk_event_keyring.a 00:04:13.596 LIB libspdk_event_scheduler.a 00:04:13.596 LIB libspdk_event_iobuf.a 00:04:13.596 SO libspdk_event_keyring.so.1.0 00:04:13.596 SO libspdk_event_sock.so.5.0 00:04:13.596 SO libspdk_event_vhost_blk.so.3.0 00:04:13.596 SO libspdk_event_vmd.so.6.0 00:04:13.596 SO libspdk_event_scheduler.so.4.0 00:04:13.596 SO libspdk_event_iobuf.so.3.0 00:04:13.596 SYMLINK libspdk_event_vhost_blk.so 00:04:13.596 SYMLINK libspdk_event_sock.so 00:04:13.596 SYMLINK libspdk_event_keyring.so 00:04:13.596 SYMLINK libspdk_event_vmd.so 00:04:13.596 SYMLINK libspdk_event_scheduler.so 00:04:13.596 SYMLINK libspdk_event_iobuf.so 00:04:13.855 CC module/event/subsystems/accel/accel.o 00:04:14.114 LIB libspdk_event_accel.a 00:04:14.114 SO 
libspdk_event_accel.so.6.0 00:04:14.114 SYMLINK libspdk_event_accel.so 00:04:14.372 CC module/event/subsystems/bdev/bdev.o 00:04:14.631 LIB libspdk_event_bdev.a 00:04:14.631 SO libspdk_event_bdev.so.6.0 00:04:14.631 SYMLINK libspdk_event_bdev.so 00:04:14.889 CC module/event/subsystems/scsi/scsi.o 00:04:14.889 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:14.889 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:14.889 CC module/event/subsystems/ublk/ublk.o 00:04:14.889 CC module/event/subsystems/nbd/nbd.o 00:04:15.147 LIB libspdk_event_ublk.a 00:04:15.147 LIB libspdk_event_nbd.a 00:04:15.147 LIB libspdk_event_scsi.a 00:04:15.147 SO libspdk_event_ublk.so.3.0 00:04:15.147 SO libspdk_event_scsi.so.6.0 00:04:15.147 SO libspdk_event_nbd.so.6.0 00:04:15.147 SYMLINK libspdk_event_ublk.so 00:04:15.147 SYMLINK libspdk_event_nbd.so 00:04:15.147 SYMLINK libspdk_event_scsi.so 00:04:15.147 LIB libspdk_event_nvmf.a 00:04:15.406 SO libspdk_event_nvmf.so.6.0 00:04:15.406 SYMLINK libspdk_event_nvmf.so 00:04:15.406 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:15.406 CC module/event/subsystems/iscsi/iscsi.o 00:04:15.664 LIB libspdk_event_vhost_scsi.a 00:04:15.664 LIB libspdk_event_iscsi.a 00:04:15.664 SO libspdk_event_vhost_scsi.so.3.0 00:04:15.664 SO libspdk_event_iscsi.so.6.0 00:04:15.664 SYMLINK libspdk_event_vhost_scsi.so 00:04:15.664 SYMLINK libspdk_event_iscsi.so 00:04:15.922 SO libspdk.so.6.0 00:04:15.922 SYMLINK libspdk.so 00:04:16.180 CC app/trace_record/trace_record.o 00:04:16.180 CXX app/trace/trace.o 00:04:16.180 CC app/spdk_nvme_perf/perf.o 00:04:16.181 CC app/spdk_lspci/spdk_lspci.o 00:04:16.181 CC app/nvmf_tgt/nvmf_main.o 00:04:16.181 CC app/iscsi_tgt/iscsi_tgt.o 00:04:16.181 CC examples/ioat/perf/perf.o 00:04:16.181 CC test/thread/poller_perf/poller_perf.o 00:04:16.181 CC app/spdk_tgt/spdk_tgt.o 00:04:16.439 CC examples/util/zipf/zipf.o 00:04:16.439 LINK spdk_lspci 00:04:16.439 LINK nvmf_tgt 00:04:16.439 LINK poller_perf 00:04:16.439 LINK zipf 00:04:16.439 LINK spdk_trace_record 00:04:16.439 LINK iscsi_tgt 00:04:16.439 LINK spdk_tgt 00:04:16.696 LINK ioat_perf 00:04:16.696 CC app/spdk_nvme_identify/identify.o 00:04:16.696 LINK spdk_trace 00:04:16.696 TEST_HEADER include/spdk/accel.h 00:04:16.696 TEST_HEADER include/spdk/accel_module.h 00:04:16.696 TEST_HEADER include/spdk/assert.h 00:04:16.696 TEST_HEADER include/spdk/barrier.h 00:04:16.696 TEST_HEADER include/spdk/base64.h 00:04:16.696 TEST_HEADER include/spdk/bdev.h 00:04:16.696 TEST_HEADER include/spdk/bdev_module.h 00:04:16.696 TEST_HEADER include/spdk/bdev_zone.h 00:04:16.696 TEST_HEADER include/spdk/bit_array.h 00:04:16.696 TEST_HEADER include/spdk/bit_pool.h 00:04:16.696 TEST_HEADER include/spdk/blob_bdev.h 00:04:16.696 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:16.696 TEST_HEADER include/spdk/blobfs.h 00:04:16.696 TEST_HEADER include/spdk/blob.h 00:04:16.696 TEST_HEADER include/spdk/conf.h 00:04:16.696 TEST_HEADER include/spdk/config.h 00:04:16.696 TEST_HEADER include/spdk/cpuset.h 00:04:16.696 CC examples/ioat/verify/verify.o 00:04:16.696 TEST_HEADER include/spdk/crc16.h 00:04:16.954 TEST_HEADER include/spdk/crc32.h 00:04:16.954 TEST_HEADER include/spdk/crc64.h 00:04:16.954 TEST_HEADER include/spdk/dif.h 00:04:16.954 CC app/spdk_nvme_discover/discovery_aer.o 00:04:16.954 TEST_HEADER include/spdk/dma.h 00:04:16.954 TEST_HEADER include/spdk/endian.h 00:04:16.954 TEST_HEADER include/spdk/env_dpdk.h 00:04:16.954 TEST_HEADER include/spdk/env.h 00:04:16.954 TEST_HEADER include/spdk/event.h 00:04:16.954 TEST_HEADER 
include/spdk/fd_group.h 00:04:16.954 TEST_HEADER include/spdk/fd.h 00:04:16.954 TEST_HEADER include/spdk/file.h 00:04:16.954 TEST_HEADER include/spdk/ftl.h 00:04:16.954 TEST_HEADER include/spdk/gpt_spec.h 00:04:16.954 TEST_HEADER include/spdk/hexlify.h 00:04:16.954 TEST_HEADER include/spdk/histogram_data.h 00:04:16.954 TEST_HEADER include/spdk/idxd.h 00:04:16.954 TEST_HEADER include/spdk/idxd_spec.h 00:04:16.954 TEST_HEADER include/spdk/init.h 00:04:16.954 TEST_HEADER include/spdk/ioat.h 00:04:16.954 TEST_HEADER include/spdk/ioat_spec.h 00:04:16.954 TEST_HEADER include/spdk/iscsi_spec.h 00:04:16.954 TEST_HEADER include/spdk/json.h 00:04:16.954 TEST_HEADER include/spdk/jsonrpc.h 00:04:16.954 TEST_HEADER include/spdk/keyring.h 00:04:16.954 TEST_HEADER include/spdk/keyring_module.h 00:04:16.954 TEST_HEADER include/spdk/likely.h 00:04:16.954 CC test/dma/test_dma/test_dma.o 00:04:16.954 TEST_HEADER include/spdk/log.h 00:04:16.954 TEST_HEADER include/spdk/lvol.h 00:04:16.954 TEST_HEADER include/spdk/memory.h 00:04:16.954 TEST_HEADER include/spdk/mmio.h 00:04:16.954 TEST_HEADER include/spdk/nbd.h 00:04:16.954 TEST_HEADER include/spdk/net.h 00:04:16.954 TEST_HEADER include/spdk/notify.h 00:04:16.954 TEST_HEADER include/spdk/nvme.h 00:04:16.954 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:16.954 TEST_HEADER include/spdk/nvme_intel.h 00:04:16.954 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:16.954 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:16.954 CC test/app/bdev_svc/bdev_svc.o 00:04:16.954 TEST_HEADER include/spdk/nvme_spec.h 00:04:16.954 TEST_HEADER include/spdk/nvme_zns.h 00:04:16.954 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:16.954 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:16.954 TEST_HEADER include/spdk/nvmf.h 00:04:16.954 TEST_HEADER include/spdk/nvmf_spec.h 00:04:16.954 TEST_HEADER include/spdk/nvmf_transport.h 00:04:16.954 TEST_HEADER include/spdk/opal.h 00:04:16.954 TEST_HEADER include/spdk/opal_spec.h 00:04:16.954 TEST_HEADER include/spdk/pci_ids.h 00:04:16.954 TEST_HEADER include/spdk/pipe.h 00:04:16.954 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.954 TEST_HEADER include/spdk/queue.h 00:04:16.954 TEST_HEADER include/spdk/reduce.h 00:04:16.954 TEST_HEADER include/spdk/rpc.h 00:04:16.954 TEST_HEADER include/spdk/scheduler.h 00:04:16.954 TEST_HEADER include/spdk/scsi.h 00:04:16.954 TEST_HEADER include/spdk/scsi_spec.h 00:04:16.954 TEST_HEADER include/spdk/sock.h 00:04:16.954 TEST_HEADER include/spdk/stdinc.h 00:04:16.954 TEST_HEADER include/spdk/string.h 00:04:16.954 TEST_HEADER include/spdk/thread.h 00:04:16.954 TEST_HEADER include/spdk/trace.h 00:04:16.954 TEST_HEADER include/spdk/trace_parser.h 00:04:16.954 TEST_HEADER include/spdk/tree.h 00:04:16.954 TEST_HEADER include/spdk/ublk.h 00:04:16.954 TEST_HEADER include/spdk/util.h 00:04:16.954 TEST_HEADER include/spdk/uuid.h 00:04:16.954 TEST_HEADER include/spdk/version.h 00:04:16.954 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:16.954 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:16.954 TEST_HEADER include/spdk/vhost.h 00:04:16.954 TEST_HEADER include/spdk/vmd.h 00:04:16.954 TEST_HEADER include/spdk/xor.h 00:04:16.954 TEST_HEADER include/spdk/zipf.h 00:04:16.954 CXX test/cpp_headers/accel.o 00:04:16.954 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:16.954 LINK spdk_nvme_discover 00:04:16.954 LINK verify 00:04:17.212 LINK bdev_svc 00:04:17.213 LINK interrupt_tgt 00:04:17.213 LINK mem_callbacks 00:04:17.213 CXX test/cpp_headers/accel_module.o 00:04:17.213 LINK spdk_nvme_perf 00:04:17.213 CC 
test/app/histogram_perf/histogram_perf.o 00:04:17.471 CC test/app/jsoncat/jsoncat.o 00:04:17.471 LINK test_dma 00:04:17.471 CXX test/cpp_headers/assert.o 00:04:17.471 CC test/env/vtophys/vtophys.o 00:04:17.471 LINK histogram_perf 00:04:17.471 LINK jsoncat 00:04:17.471 CC examples/thread/thread/thread_ex.o 00:04:17.471 CC examples/sock/hello_world/hello_sock.o 00:04:17.471 CXX test/cpp_headers/barrier.o 00:04:17.471 LINK vtophys 00:04:17.471 LINK nvme_fuzz 00:04:17.729 CC examples/vmd/lsvmd/lsvmd.o 00:04:17.729 LINK spdk_nvme_identify 00:04:17.729 CXX test/cpp_headers/base64.o 00:04:17.729 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:17.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:17.729 CC examples/idxd/perf/perf.o 00:04:17.729 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:17.729 LINK thread 00:04:17.729 CC test/app/stub/stub.o 00:04:17.729 LINK hello_sock 00:04:17.729 LINK lsvmd 00:04:17.987 CXX test/cpp_headers/bdev.o 00:04:17.987 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:17.987 CC app/spdk_top/spdk_top.o 00:04:17.987 LINK env_dpdk_post_init 00:04:17.987 LINK stub 00:04:17.987 CC examples/vmd/led/led.o 00:04:17.987 CC test/env/memory/memory_ut.o 00:04:17.987 CXX test/cpp_headers/bdev_module.o 00:04:17.987 CC test/env/pci/pci_ut.o 00:04:18.245 CXX test/cpp_headers/bdev_zone.o 00:04:18.245 LINK idxd_perf 00:04:18.245 LINK led 00:04:18.503 CXX test/cpp_headers/bit_array.o 00:04:18.503 CXX test/cpp_headers/bit_pool.o 00:04:18.503 LINK vhost_fuzz 00:04:18.503 CC app/vhost/vhost.o 00:04:18.503 CC test/event/event_perf/event_perf.o 00:04:18.503 CC examples/nvme/hello_world/hello_world.o 00:04:18.503 CXX test/cpp_headers/blob_bdev.o 00:04:18.503 LINK pci_ut 00:04:18.761 CC test/event/reactor/reactor.o 00:04:18.761 CC test/event/reactor_perf/reactor_perf.o 00:04:18.761 LINK event_perf 00:04:18.761 LINK vhost 00:04:18.761 LINK reactor 00:04:18.761 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.761 LINK hello_world 00:04:18.761 CXX test/cpp_headers/blobfs.o 00:04:18.761 LINK reactor_perf 00:04:18.761 CXX test/cpp_headers/blob.o 00:04:19.020 CXX test/cpp_headers/conf.o 00:04:19.020 CXX test/cpp_headers/config.o 00:04:19.020 CXX test/cpp_headers/cpuset.o 00:04:19.020 LINK memory_ut 00:04:19.020 CXX test/cpp_headers/crc16.o 00:04:19.020 CXX test/cpp_headers/crc32.o 00:04:19.020 CC test/event/app_repeat/app_repeat.o 00:04:19.020 CC examples/nvme/reconnect/reconnect.o 00:04:19.020 LINK spdk_top 00:04:19.278 CXX test/cpp_headers/crc64.o 00:04:19.278 CC examples/accel/perf/accel_perf.o 00:04:19.278 CC examples/blob/hello_world/hello_blob.o 00:04:19.278 CXX test/cpp_headers/dif.o 00:04:19.278 LINK app_repeat 00:04:19.278 CXX test/cpp_headers/dma.o 00:04:19.278 CC test/event/scheduler/scheduler.o 00:04:19.278 CC app/spdk_dd/spdk_dd.o 00:04:19.278 CXX test/cpp_headers/endian.o 00:04:19.537 CXX test/cpp_headers/env_dpdk.o 00:04:19.537 CXX test/cpp_headers/env.o 00:04:19.537 LINK hello_blob 00:04:19.537 LINK reconnect 00:04:19.537 LINK scheduler 00:04:19.537 CC test/rpc_client/rpc_client_test.o 00:04:19.537 CC test/nvme/aer/aer.o 00:04:19.537 CXX test/cpp_headers/event.o 00:04:19.795 CC test/nvme/reset/reset.o 00:04:19.795 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:19.795 LINK spdk_dd 00:04:19.795 LINK accel_perf 00:04:19.795 CXX test/cpp_headers/fd_group.o 00:04:19.795 LINK rpc_client_test 00:04:19.795 CC examples/blob/cli/blobcli.o 00:04:19.795 CC examples/nvme/arbitration/arbitration.o 00:04:19.795 LINK iscsi_fuzz 00:04:20.054 LINK aer 00:04:20.054 CXX test/cpp_headers/fd.o 
00:04:20.054 LINK reset 00:04:20.054 CC test/nvme/sgl/sgl.o 00:04:20.054 CXX test/cpp_headers/file.o 00:04:20.312 CC app/fio/nvme/fio_plugin.o 00:04:20.312 CC test/accel/dif/dif.o 00:04:20.312 LINK arbitration 00:04:20.312 CC test/nvme/e2edp/nvme_dp.o 00:04:20.312 CXX test/cpp_headers/ftl.o 00:04:20.312 CC test/blobfs/mkfs/mkfs.o 00:04:20.312 CC test/lvol/esnap/esnap.o 00:04:20.312 LINK nvme_manage 00:04:20.312 LINK sgl 00:04:20.312 LINK blobcli 00:04:20.570 CC examples/nvme/hotplug/hotplug.o 00:04:20.570 CXX test/cpp_headers/gpt_spec.o 00:04:20.570 LINK mkfs 00:04:20.570 LINK nvme_dp 00:04:20.570 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.828 CXX test/cpp_headers/hexlify.o 00:04:20.828 CC app/fio/bdev/fio_plugin.o 00:04:20.828 LINK dif 00:04:20.828 LINK hotplug 00:04:20.828 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.828 CC test/nvme/overhead/overhead.o 00:04:20.828 LINK spdk_nvme 00:04:20.828 LINK cmb_copy 00:04:20.828 CXX test/cpp_headers/histogram_data.o 00:04:20.828 CC examples/bdev/bdevperf/bdevperf.o 00:04:21.086 CC test/nvme/err_injection/err_injection.o 00:04:21.086 CC test/nvme/startup/startup.o 00:04:21.086 CXX test/cpp_headers/idxd.o 00:04:21.086 CC test/nvme/reserve/reserve.o 00:04:21.086 LINK hello_bdev 00:04:21.086 CC examples/nvme/abort/abort.o 00:04:21.344 LINK overhead 00:04:21.344 CXX test/cpp_headers/idxd_spec.o 00:04:21.344 LINK startup 00:04:21.344 LINK err_injection 00:04:21.344 LINK spdk_bdev 00:04:21.344 LINK reserve 00:04:21.344 CC test/nvme/simple_copy/simple_copy.o 00:04:21.344 CXX test/cpp_headers/init.o 00:04:21.603 CC test/nvme/connect_stress/connect_stress.o 00:04:21.603 CC test/nvme/boot_partition/boot_partition.o 00:04:21.603 CC test/nvme/compliance/nvme_compliance.o 00:04:21.603 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.603 LINK abort 00:04:21.603 CXX test/cpp_headers/ioat.o 00:04:21.603 CC test/bdev/bdevio/bdevio.o 00:04:21.603 LINK connect_stress 00:04:21.603 LINK boot_partition 00:04:21.603 LINK simple_copy 00:04:21.871 CXX test/cpp_headers/ioat_spec.o 00:04:21.871 CXX test/cpp_headers/iscsi_spec.o 00:04:21.871 LINK pmr_persistence 00:04:21.871 LINK bdevperf 00:04:21.871 CXX test/cpp_headers/json.o 00:04:21.871 CXX test/cpp_headers/jsonrpc.o 00:04:21.871 CXX test/cpp_headers/keyring.o 00:04:21.871 CC test/nvme/fused_ordering/fused_ordering.o 00:04:21.871 LINK nvme_compliance 00:04:21.871 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:22.130 CC test/nvme/fdp/fdp.o 00:04:22.130 CXX test/cpp_headers/keyring_module.o 00:04:22.130 LINK bdevio 00:04:22.130 CXX test/cpp_headers/likely.o 00:04:22.130 CXX test/cpp_headers/log.o 00:04:22.130 LINK fused_ordering 00:04:22.130 CC test/nvme/cuse/cuse.o 00:04:22.130 LINK doorbell_aers 00:04:22.130 CXX test/cpp_headers/lvol.o 00:04:22.388 CC examples/nvmf/nvmf/nvmf.o 00:04:22.388 CXX test/cpp_headers/memory.o 00:04:22.388 CXX test/cpp_headers/mmio.o 00:04:22.388 CXX test/cpp_headers/nbd.o 00:04:22.388 CXX test/cpp_headers/net.o 00:04:22.388 CXX test/cpp_headers/notify.o 00:04:22.388 CXX test/cpp_headers/nvme.o 00:04:22.388 CXX test/cpp_headers/nvme_intel.o 00:04:22.388 LINK fdp 00:04:22.388 CXX test/cpp_headers/nvme_ocssd.o 00:04:22.388 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:22.646 CXX test/cpp_headers/nvme_spec.o 00:04:22.646 CXX test/cpp_headers/nvme_zns.o 00:04:22.646 CXX test/cpp_headers/nvmf_cmd.o 00:04:22.646 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:22.646 LINK nvmf 00:04:22.646 CXX test/cpp_headers/nvmf.o 00:04:22.646 CXX test/cpp_headers/nvmf_spec.o 00:04:22.646 CXX 
test/cpp_headers/nvmf_transport.o 00:04:22.646 CXX test/cpp_headers/opal.o 00:04:22.904 CXX test/cpp_headers/opal_spec.o 00:04:22.904 CXX test/cpp_headers/pci_ids.o 00:04:22.904 CXX test/cpp_headers/pipe.o 00:04:22.904 CXX test/cpp_headers/queue.o 00:04:22.904 CXX test/cpp_headers/reduce.o 00:04:22.904 CXX test/cpp_headers/rpc.o 00:04:22.904 CXX test/cpp_headers/scheduler.o 00:04:22.904 CXX test/cpp_headers/scsi.o 00:04:22.904 CXX test/cpp_headers/scsi_spec.o 00:04:22.904 CXX test/cpp_headers/sock.o 00:04:22.904 CXX test/cpp_headers/stdinc.o 00:04:22.904 CXX test/cpp_headers/string.o 00:04:22.904 CXX test/cpp_headers/thread.o 00:04:23.162 CXX test/cpp_headers/trace.o 00:04:23.162 CXX test/cpp_headers/trace_parser.o 00:04:23.162 CXX test/cpp_headers/tree.o 00:04:23.162 CXX test/cpp_headers/ublk.o 00:04:23.162 CXX test/cpp_headers/util.o 00:04:23.162 CXX test/cpp_headers/uuid.o 00:04:23.162 CXX test/cpp_headers/version.o 00:04:23.162 CXX test/cpp_headers/vfio_user_pci.o 00:04:23.162 CXX test/cpp_headers/vfio_user_spec.o 00:04:23.162 CXX test/cpp_headers/vhost.o 00:04:23.162 CXX test/cpp_headers/vmd.o 00:04:23.162 CXX test/cpp_headers/xor.o 00:04:23.421 CXX test/cpp_headers/zipf.o 00:04:23.680 LINK cuse 00:04:26.981 LINK esnap 00:04:26.981 ************************************ 00:04:26.981 END TEST make 00:04:26.981 ************************************ 00:04:26.981 00:04:26.981 real 1m3.850s 00:04:26.981 user 5m55.035s 00:04:26.981 sys 1m8.108s 00:04:26.981 12:46:18 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:26.981 12:46:18 make -- common/autotest_common.sh@10 -- $ set +x 00:04:27.255 12:46:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:27.255 12:46:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:27.255 12:46:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:27.255 12:46:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.255 12:46:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:27.255 12:46:18 -- pm/common@44 -- $ pid=6059 00:04:27.255 12:46:18 -- pm/common@50 -- $ kill -TERM 6059 00:04:27.255 12:46:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.255 12:46:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:27.255 12:46:18 -- pm/common@44 -- $ pid=6061 00:04:27.255 12:46:18 -- pm/common@50 -- $ kill -TERM 6061 00:04:27.255 12:46:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.255 12:46:18 -- nvmf/common.sh@7 -- # uname -s 00:04:27.255 12:46:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.255 12:46:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.255 12:46:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.255 12:46:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.255 12:46:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.255 12:46:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.255 12:46:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.255 12:46:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.255 12:46:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.255 12:46:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.255 12:46:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1f8a89f1-be0b-44bd-ab10-6c9c6254bb74 00:04:27.255 12:46:18 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1f8a89f1-be0b-44bd-ab10-6c9c6254bb74 00:04:27.255 12:46:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.255 12:46:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.255 12:46:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.255 12:46:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.255 12:46:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.255 12:46:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.255 12:46:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.255 12:46:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.255 12:46:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.255 12:46:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.255 12:46:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.255 12:46:18 -- paths/export.sh@5 -- # export PATH 00:04:27.255 12:46:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.255 12:46:18 -- nvmf/common.sh@47 -- # : 0 00:04:27.255 12:46:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:27.255 12:46:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:27.255 12:46:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.255 12:46:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.255 12:46:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.255 12:46:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:27.255 12:46:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:27.255 12:46:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:27.255 12:46:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:27.255 12:46:18 -- spdk/autotest.sh@32 -- # uname -s 00:04:27.255 12:46:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:27.255 12:46:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:27.255 12:46:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.255 12:46:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:27.255 12:46:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.255 12:46:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:27.255 12:46:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:27.255 12:46:18 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:04:27.255 12:46:18 -- spdk/autotest.sh@48 -- # udevadm_pid=65974 00:04:27.255 12:46:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:27.255 12:46:18 -- pm/common@17 -- # local monitor 00:04:27.255 12:46:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.255 12:46:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.255 12:46:18 -- pm/common@25 -- # sleep 1 00:04:27.255 12:46:18 -- pm/common@21 -- # date +%s 00:04:27.255 12:46:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:27.255 12:46:18 -- pm/common@21 -- # date +%s 00:04:27.255 12:46:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723380378 00:04:27.255 12:46:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723380378 00:04:27.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723380378_collect-cpu-load.pm.log 00:04:27.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723380378_collect-vmstat.pm.log 00:04:28.192 12:46:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:28.192 12:46:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:28.192 12:46:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:28.192 12:46:19 -- common/autotest_common.sh@10 -- # set +x 00:04:28.452 12:46:19 -- spdk/autotest.sh@59 -- # create_test_list 00:04:28.452 12:46:19 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:28.452 12:46:19 -- common/autotest_common.sh@10 -- # set +x 00:04:28.452 12:46:19 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:28.452 12:46:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:28.452 12:46:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:28.452 12:46:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:28.452 12:46:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:28.452 12:46:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:28.452 12:46:19 -- common/autotest_common.sh@1451 -- # uname 00:04:28.452 12:46:19 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:28.452 12:46:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:28.452 12:46:19 -- common/autotest_common.sh@1471 -- # uname 00:04:28.452 12:46:19 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:28.452 12:46:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:28.452 12:46:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:28.452 12:46:19 -- spdk/autotest.sh@72 -- # hash lcov 00:04:28.452 12:46:19 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:28.452 12:46:19 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:28.452 --rc lcov_branch_coverage=1 00:04:28.452 --rc lcov_function_coverage=1 00:04:28.452 --rc genhtml_branch_coverage=1 00:04:28.452 --rc genhtml_function_coverage=1 00:04:28.452 --rc genhtml_legend=1 00:04:28.452 --rc geninfo_all_blocks=1 00:04:28.452 ' 00:04:28.452 12:46:19 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:28.452 --rc lcov_branch_coverage=1 00:04:28.452 --rc lcov_function_coverage=1 00:04:28.452 --rc genhtml_branch_coverage=1 00:04:28.452 --rc genhtml_function_coverage=1 
00:04:28.452 --rc genhtml_legend=1 00:04:28.452 --rc geninfo_all_blocks=1 00:04:28.452 ' 00:04:28.452 12:46:19 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:28.452 --rc lcov_branch_coverage=1 00:04:28.452 --rc lcov_function_coverage=1 00:04:28.452 --rc genhtml_branch_coverage=1 00:04:28.452 --rc genhtml_function_coverage=1 00:04:28.452 --rc genhtml_legend=1 00:04:28.452 --rc geninfo_all_blocks=1 00:04:28.452 --no-external' 00:04:28.452 12:46:19 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:28.452 --rc lcov_branch_coverage=1 00:04:28.452 --rc lcov_function_coverage=1 00:04:28.452 --rc genhtml_branch_coverage=1 00:04:28.452 --rc genhtml_function_coverage=1 00:04:28.452 --rc genhtml_legend=1 00:04:28.452 --rc geninfo_all_blocks=1 00:04:28.452 --no-external' 00:04:28.452 12:46:19 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:28.452 lcov: LCOV version 1.15 00:04:28.452 12:46:19 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:43.337 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:43.337 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 
00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:55.544 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:55.544 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:55.545 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:55.545 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:55.545 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:55.545 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:55.545 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:55.546 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:55.546 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:57.447 12:46:48 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:57.447 12:46:48 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:57.447 12:46:48 -- common/autotest_common.sh@10 -- # set +x 00:04:57.447 12:46:48 -- spdk/autotest.sh@91 -- # rm -f 00:04:57.447 12:46:48 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.580 0000:00:11.0 (1b36 0010): Already using the nvme driver 
00:04:58.580 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:58.580 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:58.580 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:58.580 12:46:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:58.580 12:46:50 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:58.580 12:46:50 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:58.580 12:46:50 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:58.580 12:46:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.580 12:46:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:58.580 12:46:50 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:58.580 12:46:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.580 12:46:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.580 12:46:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.580 12:46:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:58.580 12:46:50 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:58.580 12:46:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:58.580 12:46:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.580 12:46:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.580 12:46:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:04:58.580 12:46:50 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:04:58.580 12:46:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:58.580 12:46:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.580 12:46:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.581 12:46:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:04:58.581 12:46:50 -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:04:58.581 12:46:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:58.581 12:46:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.581 12:46:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.581 12:46:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:04:58.581 12:46:50 -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:04:58.581 12:46:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:58.581 12:46:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.581 12:46:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.581 12:46:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:04:58.581 12:46:50 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:04:58.581 12:46:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:58.581 12:46:50 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:58.581 12:46:50 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:58.581 12:46:50 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:04:58.581 12:46:50 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:04:58.581 12:46:50 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:58.581 12:46:50 -- common/autotest_common.sh@1661 -- # [[ 
none != none ]] 00:04:58.581 12:46:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:58.581 12:46:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.581 12:46:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:58.581 12:46:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:58.581 12:46:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:58.581 12:46:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:58.581 No valid GPT data, bailing 00:04:58.581 12:46:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:58.581 12:46:50 -- scripts/common.sh@391 -- # pt= 00:04:58.581 12:46:50 -- scripts/common.sh@392 -- # return 1 00:04:58.581 12:46:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:58.581 1+0 records in 00:04:58.581 1+0 records out 00:04:58.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133526 s, 78.5 MB/s 00:04:58.581 12:46:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.581 12:46:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:58.581 12:46:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:58.581 12:46:50 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:58.581 12:46:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:58.840 No valid GPT data, bailing 00:04:58.840 12:46:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:58.840 12:46:50 -- scripts/common.sh@391 -- # pt= 00:04:58.840 12:46:50 -- scripts/common.sh@392 -- # return 1 00:04:58.840 12:46:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:58.840 1+0 records in 00:04:58.840 1+0 records out 00:04:58.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00334465 s, 314 MB/s 00:04:58.840 12:46:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.840 12:46:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:58.840 12:46:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:58.840 12:46:50 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:58.840 12:46:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:58.840 No valid GPT data, bailing 00:04:58.840 12:46:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:58.840 12:46:50 -- scripts/common.sh@391 -- # pt= 00:04:58.840 12:46:50 -- scripts/common.sh@392 -- # return 1 00:04:58.840 12:46:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:58.840 1+0 records in 00:04:58.840 1+0 records out 00:04:58.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438138 s, 239 MB/s 00:04:58.840 12:46:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.840 12:46:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:58.840 12:46:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:58.840 12:46:50 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:58.840 12:46:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:58.840 No valid GPT data, bailing 00:04:58.840 12:46:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:58.840 12:46:50 -- scripts/common.sh@391 -- # pt= 00:04:58.840 12:46:50 -- scripts/common.sh@392 -- # return 1 00:04:58.840 12:46:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:58.840 1+0 records in 00:04:58.840 1+0 
records out 00:04:58.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446722 s, 235 MB/s 00:04:58.840 12:46:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:58.840 12:46:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:58.840 12:46:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:58.840 12:46:50 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:58.840 12:46:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:58.840 No valid GPT data, bailing 00:04:58.840 12:46:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:59.098 12:46:50 -- scripts/common.sh@391 -- # pt= 00:04:59.098 12:46:50 -- scripts/common.sh@392 -- # return 1 00:04:59.098 12:46:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:59.098 1+0 records in 00:04:59.098 1+0 records out 00:04:59.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00313762 s, 334 MB/s 00:04:59.098 12:46:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:59.098 12:46:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:59.098 12:46:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:59.098 12:46:50 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:59.098 12:46:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:59.098 No valid GPT data, bailing 00:04:59.098 12:46:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:59.098 12:46:50 -- scripts/common.sh@391 -- # pt= 00:04:59.098 12:46:50 -- scripts/common.sh@392 -- # return 1 00:04:59.098 12:46:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:59.098 1+0 records in 00:04:59.098 1+0 records out 00:04:59.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547778 s, 191 MB/s 00:04:59.098 12:46:50 -- spdk/autotest.sh@118 -- # sync 00:04:59.098 12:46:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:59.098 12:46:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:59.098 12:46:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:01.002 12:46:52 -- spdk/autotest.sh@124 -- # uname -s 00:05:01.002 12:46:52 -- spdk/autotest.sh@124 -- # [[ Linux == Linux ]] 00:05:01.002 12:46:52 -- spdk/autotest.sh@124 -- # [[ 0 -eq 1 ]] 00:05:01.002 12:46:52 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.826 Hugepages 00:05:01.826 node hugesize free / total 00:05:01.826 node0 1048576kB 0 / 0 00:05:01.826 node0 2048kB 0 / 0 00:05:01.826 00:05:01.826 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.826 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.085 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.085 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:02.085 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:02.343 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:02.344 12:46:53 -- spdk/autotest.sh@130 -- # uname -s 00:05:02.344 12:46:53 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:02.344 12:46:53 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:02.344 12:46:53 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.910 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.479 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.479 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.479 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.479 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.479 12:46:54 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:04.414 12:46:55 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:04.414 12:46:55 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:04.414 12:46:55 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.414 12:46:55 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:04.414 12:46:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:04.414 12:46:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:04.414 12:46:55 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.414 12:46:55 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.414 12:46:55 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:04.414 12:46:55 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:04.414 12:46:55 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:04.414 12:46:55 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.981 Waiting for block devices as requested 00:05:05.240 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.240 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.240 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.507 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.794 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:10.794 12:47:01 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:10.794 12:47:01 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:10.794 12:47:01 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.794 12:47:01 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:10.794 12:47:01 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:10.794 12:47:01 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:10.795 12:47:01 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:10.795 12:47:01 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:10.795 12:47:01 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:10.795 12:47:01 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:10.795 12:47:01 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:10.795 12:47:01 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:10.795 12:47:01 -- 
common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:05:10.795 12:47:01 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:10.795 12:47:01 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:10.795 12:47:01 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:10.795 12:47:01 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:10.795 12:47:01 -- common/autotest_common.sh@1553 -- # continue 00:05:10.795 12:47:01 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:10.795 12:47:01 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:10.795 12:47:01 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:10.795 12:47:01 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.795 12:47:01 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:10.795 12:47:01 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:10.795 12:47:01 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:10.795 12:47:01 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:10.795 12:47:01 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:10.795 12:47:01 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:10.795 12:47:01 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:10.795 12:47:01 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:10.795 12:47:01 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:10.795 12:47:01 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:10.795 12:47:01 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:10.795 12:47:01 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:10.795 12:47:02 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1553 -- # continue 00:05:10.795 12:47:02 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:10.795 12:47:02 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:10.795 12:47:02 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1498 -- # grep 0000:00:12.0/nvme/nvme 00:05:10.795 12:47:02 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- 
# grep oacs 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:10.795 12:47:02 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:10.795 12:47:02 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:10.795 12:47:02 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1553 -- # continue 00:05:10.795 12:47:02 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:10.795 12:47:02 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:10.795 12:47:02 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:05:10.795 12:47:02 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:10.795 12:47:02 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:10.795 12:47:02 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:10.795 12:47:02 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:10.795 12:47:02 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:10.795 12:47:02 -- common/autotest_common.sh@1553 -- # continue 00:05:10.795 12:47:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:10.795 12:47:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.795 12:47:02 -- common/autotest_common.sh@10 -- # set +x 00:05:10.795 12:47:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:10.795 12:47:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:10.795 12:47:02 -- common/autotest_common.sh@10 -- # set +x 00:05:10.795 12:47:02 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.620 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.878 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.878 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 
00:05:11.878 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.878 12:47:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:11.878 12:47:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.878 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:05:11.878 12:47:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:11.878 12:47:03 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:11.878 12:47:03 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.878 12:47:03 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:11.878 12:47:03 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:11.878 12:47:03 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:11.878 12:47:03 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:11.878 12:47:03 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:11.878 12:47:03 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.878 12:47:03 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:11.878 12:47:03 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:11.878 12:47:03 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:11.878 12:47:03 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:11.878 12:47:03 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:11.878 12:47:03 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:11.878 12:47:03 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:11.878 12:47:03 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.878 12:47:03 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:11.878 12:47:03 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:11.878 12:47:03 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:11.878 12:47:03 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.878 12:47:03 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:11.878 12:47:03 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:11.878 12:47:03 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:11.878 12:47:03 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.878 12:47:03 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:12.136 12:47:03 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:12.136 12:47:03 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:12.136 12:47:03 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.136 12:47:03 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:12.136 12:47:03 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:12.136 12:47:03 -- common/autotest_common.sh@1589 -- # return 0 00:05:12.136 12:47:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:12.136 12:47:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:12.136 12:47:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:12.136 12:47:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:12.136 12:47:03 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:12.136 12:47:03 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:12.136 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:05:12.136 
12:47:03 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:12.136 12:47:03 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.136 12:47:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.136 12:47:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.136 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:05:12.136 ************************************ 00:05:12.136 START TEST env 00:05:12.136 ************************************ 00:05:12.136 12:47:03 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.136 * Looking for test storage... 00:05:12.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:12.136 12:47:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.136 12:47:03 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.136 12:47:03 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.136 12:47:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.136 ************************************ 00:05:12.136 START TEST env_memory 00:05:12.136 ************************************ 00:05:12.136 12:47:03 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.136 00:05:12.136 00:05:12.136 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.136 http://cunit.sourceforge.net/ 00:05:12.136 00:05:12.136 00:05:12.136 Suite: memory 00:05:12.136 Test: alloc and free memory map ...[2024-08-11 12:47:03.670151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:12.136 passed 00:05:12.136 Test: mem map translation ...[2024-08-11 12:47:03.730531] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:12.136 [2024-08-11 12:47:03.730636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:12.136 [2024-08-11 12:47:03.730741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:12.136 [2024-08-11 12:47:03.730775] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:12.395 passed 00:05:12.395 Test: mem map registration ...[2024-08-11 12:47:03.829143] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:12.395 [2024-08-11 12:47:03.829215] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:12.395 passed 00:05:12.395 Test: mem map adjacent registrations ...passed 00:05:12.395 00:05:12.395 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.395 suites 1 1 n/a 0 0 00:05:12.395 tests 4 4 4 0 0 00:05:12.395 asserts 152 152 152 0 n/a 00:05:12.395 00:05:12.395 Elapsed time = 0.347 seconds 00:05:12.395 00:05:12.395 real 0m0.381s 00:05:12.395 user 0m0.352s 00:05:12.395 sys 0m0.024s 00:05:12.395 12:47:03 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.395 12:47:03 env.env_memory -- common/autotest_common.sh@10 -- 
# set +x 00:05:12.395 ************************************ 00:05:12.395 END TEST env_memory 00:05:12.395 ************************************ 00:05:12.655 12:47:04 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:12.655 12:47:04 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.655 12:47:04 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.655 12:47:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.655 ************************************ 00:05:12.655 START TEST env_vtophys 00:05:12.655 ************************************ 00:05:12.655 12:47:04 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:12.655 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:12.655 EAL: lib.eal log level changed from notice to debug 00:05:12.655 EAL: Detected lcore 0 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 1 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 2 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 3 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 4 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 5 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 6 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 7 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 8 as core 0 on socket 0 00:05:12.655 EAL: Detected lcore 9 as core 0 on socket 0 00:05:12.655 EAL: Maximum logical cores by configuration: 128 00:05:12.655 EAL: Detected CPU lcores: 10 00:05:12.655 EAL: Detected NUMA nodes: 1 00:05:12.655 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:12.655 EAL: Detected shared linkage of DPDK 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:12.655 EAL: Registered [vdev] bus. 00:05:12.655 EAL: bus.vdev log level changed from disabled to notice 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:12.655 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:12.655 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:12.655 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:12.655 EAL: No shared files mode enabled, IPC will be disabled 00:05:12.655 EAL: No shared files mode enabled, IPC is disabled 00:05:12.655 EAL: Selected IOVA mode 'PA' 00:05:12.655 EAL: Probing VFIO support... 00:05:12.655 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:12.655 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:12.655 EAL: Ask a virtual area of 0x2e000 bytes 00:05:12.655 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:12.655 EAL: Setting up physically contiguous memory... 
00:05:12.655 EAL: Setting maximum number of open files to 524288 00:05:12.655 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:12.655 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:12.655 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.655 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:12.655 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.655 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.655 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:12.655 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:12.655 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.655 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:12.655 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.655 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.655 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:12.655 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:12.655 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.655 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:12.655 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.655 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.655 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:12.655 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:12.655 EAL: Ask a virtual area of 0x61000 bytes 00:05:12.655 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:12.655 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:12.655 EAL: Ask a virtual area of 0x400000000 bytes 00:05:12.655 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:12.655 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:12.655 EAL: Hugepages will be freed exactly as allocated. 00:05:12.655 EAL: No shared files mode enabled, IPC is disabled 00:05:12.656 EAL: No shared files mode enabled, IPC is disabled 00:05:12.656 EAL: TSC frequency is ~2200000 KHz 00:05:12.656 EAL: Main lcore 0 is ready (tid=7f5944a7ea40;cpuset=[0]) 00:05:12.656 EAL: Trying to obtain current memory policy. 00:05:12.656 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.656 EAL: Restoring previous memory policy: 0 00:05:12.656 EAL: request: mp_malloc_sync 00:05:12.656 EAL: No shared files mode enabled, IPC is disabled 00:05:12.656 EAL: Heap on socket 0 was expanded by 2MB 00:05:12.656 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:12.656 EAL: No shared files mode enabled, IPC is disabled 00:05:12.656 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:12.656 EAL: Mem event callback 'spdk:(nil)' registered 00:05:12.656 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:12.656 00:05:12.656 00:05:12.656 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.656 http://cunit.sourceforge.net/ 00:05:12.656 00:05:12.656 00:05:12.656 Suite: components_suite 00:05:13.223 Test: vtophys_malloc_test ...passed 00:05:13.223 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.223 EAL: Trying to obtain current memory policy. 00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.223 EAL: Trying to obtain current memory policy. 00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.223 EAL: Trying to obtain current memory policy. 00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 18MB 00:05:13.223 EAL: Trying to obtain current memory policy. 00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 34MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 34MB 00:05:13.223 EAL: Trying to obtain current memory policy. 
00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 66MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 66MB 00:05:13.223 EAL: Trying to obtain current memory policy. 00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 130MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 130MB 00:05:13.223 EAL: Trying to obtain current memory policy. 00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.223 EAL: Restoring previous memory policy: 4 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was expanded by 258MB 00:05:13.223 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.223 EAL: request: mp_malloc_sync 00:05:13.223 EAL: No shared files mode enabled, IPC is disabled 00:05:13.223 EAL: Heap on socket 0 was shrunk by 258MB 00:05:13.223 EAL: Trying to obtain current memory policy. 00:05:13.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.482 EAL: Restoring previous memory policy: 4 00:05:13.482 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.482 EAL: request: mp_malloc_sync 00:05:13.482 EAL: No shared files mode enabled, IPC is disabled 00:05:13.482 EAL: Heap on socket 0 was expanded by 514MB 00:05:13.482 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.482 EAL: request: mp_malloc_sync 00:05:13.482 EAL: No shared files mode enabled, IPC is disabled 00:05:13.482 EAL: Heap on socket 0 was shrunk by 514MB 00:05:13.482 EAL: Trying to obtain current memory policy. 
00:05:13.482 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.482 EAL: Restoring previous memory policy: 4 00:05:13.482 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.482 EAL: request: mp_malloc_sync 00:05:13.482 EAL: No shared files mode enabled, IPC is disabled 00:05:13.482 EAL: Heap on socket 0 was expanded by 1026MB 00:05:13.740 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.740 passed 00:05:13.740 00:05:13.740 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.740 suites 1 1 n/a 0 0 00:05:13.740 tests 2 2 2 0 0 00:05:13.740 asserts 5379 5379 5379 0 n/a 00:05:13.740 00:05:13.740 Elapsed time = 1.046 secondsEAL: request: mp_malloc_sync 00:05:13.740 EAL: No shared files mode enabled, IPC is disabled 00:05:13.740 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:13.740 00:05:13.740 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.740 EAL: request: mp_malloc_sync 00:05:13.740 EAL: No shared files mode enabled, IPC is disabled 00:05:13.740 EAL: Heap on socket 0 was shrunk by 2MB 00:05:13.740 EAL: No shared files mode enabled, IPC is disabled 00:05:13.740 EAL: No shared files mode enabled, IPC is disabled 00:05:13.741 EAL: No shared files mode enabled, IPC is disabled 00:05:13.741 00:05:13.741 real 0m1.284s 00:05:13.741 user 0m0.568s 00:05:13.741 sys 0m0.582s 00:05:13.741 12:47:05 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.741 12:47:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:13.741 ************************************ 00:05:13.741 END TEST env_vtophys 00:05:13.741 ************************************ 00:05:13.999 12:47:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:13.999 12:47:05 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.999 12:47:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.999 12:47:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.999 ************************************ 00:05:13.999 START TEST env_pci 00:05:14.000 ************************************ 00:05:14.000 12:47:05 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:14.000 00:05:14.000 00:05:14.000 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.000 http://cunit.sourceforge.net/ 00:05:14.000 00:05:14.000 00:05:14.000 Suite: pci 00:05:14.000 Test: pci_hook ...[2024-08-11 12:47:05.390801] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68767 has claimed it 00:05:14.000 passed 00:05:14.000 00:05:14.000 EAL: Cannot find device (10000:00:01.0) 00:05:14.000 EAL: Failed to attach device on primary process 00:05:14.000 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.000 suites 1 1 n/a 0 0 00:05:14.000 tests 1 1 1 0 0 00:05:14.000 asserts 25 25 25 0 n/a 00:05:14.000 00:05:14.000 Elapsed time = 0.005 seconds 00:05:14.000 00:05:14.000 real 0m0.067s 00:05:14.000 user 0m0.029s 00:05:14.000 sys 0m0.037s 00:05:14.000 12:47:05 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.000 12:47:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 ************************************ 00:05:14.000 END TEST env_pci 00:05:14.000 ************************************ 00:05:14.000 12:47:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:14.000 12:47:05 env -- env/env.sh@15 -- # uname 00:05:14.000 12:47:05 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:14.000 12:47:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:14.000 12:47:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.000 12:47:05 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:14.000 12:47:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.000 12:47:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 ************************************ 00:05:14.000 START TEST env_dpdk_post_init 00:05:14.000 ************************************ 00:05:14.000 12:47:05 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.000 EAL: Detected CPU lcores: 10 00:05:14.000 EAL: Detected NUMA nodes: 1 00:05:14.000 EAL: Detected shared linkage of DPDK 00:05:14.000 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.000 EAL: Selected IOVA mode 'PA' 00:05:14.259 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.259 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:14.259 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:14.259 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:14.259 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:14.259 Starting DPDK initialization... 00:05:14.259 Starting SPDK post initialization... 00:05:14.259 SPDK NVMe probe 00:05:14.259 Attaching to 0000:00:10.0 00:05:14.259 Attaching to 0000:00:11.0 00:05:14.259 Attaching to 0000:00:12.0 00:05:14.259 Attaching to 0000:00:13.0 00:05:14.259 Attached to 0000:00:11.0 00:05:14.259 Attached to 0000:00:13.0 00:05:14.259 Attached to 0000:00:10.0 00:05:14.259 Attached to 0000:00:12.0 00:05:14.259 Cleaning up... 
00:05:14.259 00:05:14.259 real 0m0.244s 00:05:14.259 user 0m0.074s 00:05:14.259 sys 0m0.072s 00:05:14.259 12:47:05 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.259 12:47:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.259 ************************************ 00:05:14.259 END TEST env_dpdk_post_init 00:05:14.259 ************************************ 00:05:14.259 12:47:05 env -- env/env.sh@26 -- # uname 00:05:14.259 12:47:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:14.259 12:47:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.259 12:47:05 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.259 12:47:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.259 12:47:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.259 ************************************ 00:05:14.259 START TEST env_mem_callbacks 00:05:14.259 ************************************ 00:05:14.259 12:47:05 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:14.259 EAL: Detected CPU lcores: 10 00:05:14.259 EAL: Detected NUMA nodes: 1 00:05:14.259 EAL: Detected shared linkage of DPDK 00:05:14.259 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.259 EAL: Selected IOVA mode 'PA' 00:05:14.518 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.518 00:05:14.518 00:05:14.518 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.518 http://cunit.sourceforge.net/ 00:05:14.518 00:05:14.518 00:05:14.518 Suite: memory 00:05:14.518 Test: test ... 00:05:14.518 register 0x200000200000 2097152 00:05:14.518 malloc 3145728 00:05:14.518 register 0x200000400000 4194304 00:05:14.518 buf 0x200000500000 len 3145728 PASSED 00:05:14.518 malloc 64 00:05:14.518 buf 0x2000004fff40 len 64 PASSED 00:05:14.518 malloc 4194304 00:05:14.518 register 0x200000800000 6291456 00:05:14.518 buf 0x200000a00000 len 4194304 PASSED 00:05:14.518 free 0x200000500000 3145728 00:05:14.518 free 0x2000004fff40 64 00:05:14.518 unregister 0x200000400000 4194304 PASSED 00:05:14.518 free 0x200000a00000 4194304 00:05:14.518 unregister 0x200000800000 6291456 PASSED 00:05:14.518 malloc 8388608 00:05:14.518 register 0x200000400000 10485760 00:05:14.518 buf 0x200000600000 len 8388608 PASSED 00:05:14.518 free 0x200000600000 8388608 00:05:14.518 unregister 0x200000400000 10485760 PASSED 00:05:14.518 passed 00:05:14.518 00:05:14.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.518 suites 1 1 n/a 0 0 00:05:14.518 tests 1 1 1 0 0 00:05:14.518 asserts 15 15 15 0 n/a 00:05:14.518 00:05:14.518 Elapsed time = 0.010 seconds 00:05:14.518 00:05:14.518 real 0m0.177s 00:05:14.518 user 0m0.031s 00:05:14.518 sys 0m0.044s 00:05:14.518 12:47:05 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.518 ************************************ 00:05:14.518 END TEST env_mem_callbacks 00:05:14.518 12:47:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:14.518 ************************************ 00:05:14.518 00:05:14.518 real 0m2.514s 00:05:14.518 user 0m1.177s 00:05:14.518 sys 0m0.978s 00:05:14.518 12:47:06 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.518 12:47:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.518 ************************************ 00:05:14.518 END TEST env 00:05:14.518 
************************************ 00:05:14.518 12:47:06 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:14.519 12:47:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.519 12:47:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.519 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.519 ************************************ 00:05:14.519 START TEST rpc 00:05:14.519 ************************************ 00:05:14.519 12:47:06 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:14.777 * Looking for test storage... 00:05:14.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.777 12:47:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68881 00:05:14.777 12:47:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:14.777 12:47:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.777 12:47:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68881 00:05:14.777 12:47:06 rpc -- common/autotest_common.sh@827 -- # '[' -z 68881 ']' 00:05:14.777 12:47:06 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.777 12:47:06 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:14.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.777 12:47:06 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.777 12:47:06 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:14.777 12:47:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.777 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:14.777 [2024-08-11 12:47:06.275360] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:14.777 [2024-08-11 12:47:06.275568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68881 ] 00:05:15.036 [2024-08-11 12:47:06.424517] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.036 [2024-08-11 12:47:06.463182] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.036 [2024-08-11 12:47:06.463276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68881' to capture a snapshot of events at runtime. 00:05:15.036 [2024-08-11 12:47:06.463291] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.036 [2024-08-11 12:47:06.463314] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.036 [2024-08-11 12:47:06.463325] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68881 for offline analysis/debug. 
00:05:15.036 [2024-08-11 12:47:06.463367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.972 12:47:07 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:15.972 12:47:07 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:15.972 12:47:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.972 12:47:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.972 12:47:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:15.972 12:47:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:15.972 12:47:07 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.972 12:47:07 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.972 12:47:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.972 ************************************ 00:05:15.972 START TEST rpc_integrity 00:05:15.972 ************************************ 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.972 { 00:05:15.972 "name": "Malloc0", 00:05:15.972 "aliases": [ 00:05:15.972 "1f5446d2-44c3-45d0-b418-99b083c600a0" 00:05:15.972 ], 00:05:15.972 "product_name": "Malloc disk", 00:05:15.972 "block_size": 512, 00:05:15.972 "num_blocks": 16384, 00:05:15.972 "uuid": "1f5446d2-44c3-45d0-b418-99b083c600a0", 00:05:15.972 "assigned_rate_limits": { 00:05:15.972 "rw_ios_per_sec": 0, 00:05:15.972 "rw_mbytes_per_sec": 0, 00:05:15.972 "r_mbytes_per_sec": 0, 00:05:15.972 "w_mbytes_per_sec": 0 00:05:15.972 }, 00:05:15.972 "claimed": false, 00:05:15.972 "zoned": false, 00:05:15.972 "supported_io_types": { 00:05:15.972 "read": true, 00:05:15.972 "write": true, 00:05:15.972 "unmap": true, 00:05:15.972 "flush": true, 
00:05:15.972 "reset": true, 00:05:15.972 "nvme_admin": false, 00:05:15.972 "nvme_io": false, 00:05:15.972 "nvme_io_md": false, 00:05:15.972 "write_zeroes": true, 00:05:15.972 "zcopy": true, 00:05:15.972 "get_zone_info": false, 00:05:15.972 "zone_management": false, 00:05:15.972 "zone_append": false, 00:05:15.972 "compare": false, 00:05:15.972 "compare_and_write": false, 00:05:15.972 "abort": true, 00:05:15.972 "seek_hole": false, 00:05:15.972 "seek_data": false, 00:05:15.972 "copy": true, 00:05:15.972 "nvme_iov_md": false 00:05:15.972 }, 00:05:15.972 "memory_domains": [ 00:05:15.972 { 00:05:15.972 "dma_device_id": "system", 00:05:15.972 "dma_device_type": 1 00:05:15.972 }, 00:05:15.972 { 00:05:15.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.972 "dma_device_type": 2 00:05:15.972 } 00:05:15.972 ], 00:05:15.972 "driver_specific": {} 00:05:15.972 } 00:05:15.972 ]' 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.972 [2024-08-11 12:47:07.433529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:15.972 [2024-08-11 12:47:07.433635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.972 [2024-08-11 12:47:07.433668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:15.972 [2024-08-11 12:47:07.433688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.972 [2024-08-11 12:47:07.436597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.972 [2024-08-11 12:47:07.436671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.972 Passthru0 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.972 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.972 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.972 { 00:05:15.972 "name": "Malloc0", 00:05:15.972 "aliases": [ 00:05:15.972 "1f5446d2-44c3-45d0-b418-99b083c600a0" 00:05:15.972 ], 00:05:15.972 "product_name": "Malloc disk", 00:05:15.972 "block_size": 512, 00:05:15.972 "num_blocks": 16384, 00:05:15.973 "uuid": "1f5446d2-44c3-45d0-b418-99b083c600a0", 00:05:15.973 "assigned_rate_limits": { 00:05:15.973 "rw_ios_per_sec": 0, 00:05:15.973 "rw_mbytes_per_sec": 0, 00:05:15.973 "r_mbytes_per_sec": 0, 00:05:15.973 "w_mbytes_per_sec": 0 00:05:15.973 }, 00:05:15.973 "claimed": true, 00:05:15.973 "claim_type": "exclusive_write", 00:05:15.973 "zoned": false, 00:05:15.973 "supported_io_types": { 00:05:15.973 "read": true, 00:05:15.973 "write": true, 00:05:15.973 "unmap": true, 00:05:15.973 "flush": true, 00:05:15.973 "reset": true, 00:05:15.973 "nvme_admin": false, 00:05:15.973 "nvme_io": false, 00:05:15.973 "nvme_io_md": false, 00:05:15.973 "write_zeroes": true, 00:05:15.973 "zcopy": true, 
00:05:15.973 "get_zone_info": false, 00:05:15.973 "zone_management": false, 00:05:15.973 "zone_append": false, 00:05:15.973 "compare": false, 00:05:15.973 "compare_and_write": false, 00:05:15.973 "abort": true, 00:05:15.973 "seek_hole": false, 00:05:15.973 "seek_data": false, 00:05:15.973 "copy": true, 00:05:15.973 "nvme_iov_md": false 00:05:15.973 }, 00:05:15.973 "memory_domains": [ 00:05:15.973 { 00:05:15.973 "dma_device_id": "system", 00:05:15.973 "dma_device_type": 1 00:05:15.973 }, 00:05:15.973 { 00:05:15.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.973 "dma_device_type": 2 00:05:15.973 } 00:05:15.973 ], 00:05:15.973 "driver_specific": {} 00:05:15.973 }, 00:05:15.973 { 00:05:15.973 "name": "Passthru0", 00:05:15.973 "aliases": [ 00:05:15.973 "d388e149-fbca-5701-be69-37fd4bd0ee4f" 00:05:15.973 ], 00:05:15.973 "product_name": "passthru", 00:05:15.973 "block_size": 512, 00:05:15.973 "num_blocks": 16384, 00:05:15.973 "uuid": "d388e149-fbca-5701-be69-37fd4bd0ee4f", 00:05:15.973 "assigned_rate_limits": { 00:05:15.973 "rw_ios_per_sec": 0, 00:05:15.973 "rw_mbytes_per_sec": 0, 00:05:15.973 "r_mbytes_per_sec": 0, 00:05:15.973 "w_mbytes_per_sec": 0 00:05:15.973 }, 00:05:15.973 "claimed": false, 00:05:15.973 "zoned": false, 00:05:15.973 "supported_io_types": { 00:05:15.973 "read": true, 00:05:15.973 "write": true, 00:05:15.973 "unmap": true, 00:05:15.973 "flush": true, 00:05:15.973 "reset": true, 00:05:15.973 "nvme_admin": false, 00:05:15.973 "nvme_io": false, 00:05:15.973 "nvme_io_md": false, 00:05:15.973 "write_zeroes": true, 00:05:15.973 "zcopy": true, 00:05:15.973 "get_zone_info": false, 00:05:15.973 "zone_management": false, 00:05:15.973 "zone_append": false, 00:05:15.973 "compare": false, 00:05:15.973 "compare_and_write": false, 00:05:15.973 "abort": true, 00:05:15.973 "seek_hole": false, 00:05:15.973 "seek_data": false, 00:05:15.973 "copy": true, 00:05:15.973 "nvme_iov_md": false 00:05:15.973 }, 00:05:15.973 "memory_domains": [ 00:05:15.973 { 00:05:15.973 "dma_device_id": "system", 00:05:15.973 "dma_device_type": 1 00:05:15.973 }, 00:05:15.973 { 00:05:15.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.973 "dma_device_type": 2 00:05:15.973 } 00:05:15.973 ], 00:05:15.973 "driver_specific": { 00:05:15.973 "passthru": { 00:05:15.973 "name": "Passthru0", 00:05:15.973 "base_bdev_name": "Malloc0" 00:05:15.973 } 00:05:15.973 } 00:05:15.973 } 00:05:15.973 ]' 00:05:15.973 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.973 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.973 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.973 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.973 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:15.973 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:15.973 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.973 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.231 12:47:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.231 00:05:16.231 real 0m0.331s 00:05:16.231 user 0m0.216s 00:05:16.231 sys 0m0.041s 00:05:16.231 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.231 ************************************ 00:05:16.231 END TEST rpc_integrity 00:05:16.231 12:47:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.231 ************************************ 00:05:16.231 12:47:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.231 12:47:07 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.232 12:47:07 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.232 12:47:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 ************************************ 00:05:16.232 START TEST rpc_plugins 00:05:16.232 ************************************ 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.232 { 00:05:16.232 "name": "Malloc1", 00:05:16.232 "aliases": [ 00:05:16.232 "49367047-a469-4ec7-85b3-626984db2435" 00:05:16.232 ], 00:05:16.232 "product_name": "Malloc disk", 00:05:16.232 "block_size": 4096, 00:05:16.232 "num_blocks": 256, 00:05:16.232 "uuid": "49367047-a469-4ec7-85b3-626984db2435", 00:05:16.232 "assigned_rate_limits": { 00:05:16.232 "rw_ios_per_sec": 0, 00:05:16.232 "rw_mbytes_per_sec": 0, 00:05:16.232 "r_mbytes_per_sec": 0, 00:05:16.232 "w_mbytes_per_sec": 0 00:05:16.232 }, 00:05:16.232 "claimed": false, 00:05:16.232 "zoned": false, 00:05:16.232 "supported_io_types": { 00:05:16.232 "read": true, 00:05:16.232 "write": true, 00:05:16.232 "unmap": true, 00:05:16.232 "flush": true, 00:05:16.232 "reset": true, 00:05:16.232 "nvme_admin": false, 00:05:16.232 "nvme_io": false, 00:05:16.232 "nvme_io_md": false, 00:05:16.232 "write_zeroes": true, 00:05:16.232 "zcopy": true, 00:05:16.232 "get_zone_info": false, 00:05:16.232 "zone_management": false, 00:05:16.232 "zone_append": false, 00:05:16.232 "compare": false, 00:05:16.232 "compare_and_write": false, 00:05:16.232 "abort": true, 00:05:16.232 "seek_hole": false, 00:05:16.232 "seek_data": false, 00:05:16.232 "copy": true, 00:05:16.232 "nvme_iov_md": false 00:05:16.232 }, 00:05:16.232 "memory_domains": [ 00:05:16.232 { 00:05:16.232 "dma_device_id": "system", 00:05:16.232 "dma_device_type": 1 00:05:16.232 }, 00:05:16.232 { 00:05:16.232 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:16.232 "dma_device_type": 2 00:05:16.232 } 00:05:16.232 ], 00:05:16.232 "driver_specific": {} 00:05:16.232 } 00:05:16.232 ]' 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:16.232 12:47:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:16.232 00:05:16.232 real 0m0.158s 00:05:16.232 user 0m0.105s 00:05:16.232 sys 0m0.020s 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.232 ************************************ 00:05:16.232 END TEST rpc_plugins 00:05:16.232 12:47:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 ************************************ 00:05:16.490 12:47:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:16.490 12:47:07 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.490 12:47:07 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.490 12:47:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.490 ************************************ 00:05:16.490 START TEST rpc_trace_cmd_test 00:05:16.490 ************************************ 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:16.490 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68881", 00:05:16.490 "tpoint_group_mask": "0x8", 00:05:16.490 "iscsi_conn": { 00:05:16.490 "mask": "0x2", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "scsi": { 00:05:16.490 "mask": "0x4", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "bdev": { 00:05:16.490 "mask": "0x8", 00:05:16.490 "tpoint_mask": "0xffffffffffffffff" 00:05:16.490 }, 00:05:16.490 "nvmf_rdma": { 00:05:16.490 "mask": "0x10", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "nvmf_tcp": { 00:05:16.490 "mask": "0x20", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "ftl": { 00:05:16.490 "mask": "0x40", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "blobfs": { 00:05:16.490 "mask": "0x80", 00:05:16.490 
"tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "dsa": { 00:05:16.490 "mask": "0x200", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "thread": { 00:05:16.490 "mask": "0x400", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "nvme_pcie": { 00:05:16.490 "mask": "0x800", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "iaa": { 00:05:16.490 "mask": "0x1000", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "nvme_tcp": { 00:05:16.490 "mask": "0x2000", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "bdev_nvme": { 00:05:16.490 "mask": "0x4000", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 }, 00:05:16.490 "sock": { 00:05:16.490 "mask": "0x8000", 00:05:16.490 "tpoint_mask": "0x0" 00:05:16.490 } 00:05:16.490 }' 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:16.490 12:47:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:16.491 12:47:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:16.491 12:47:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:16.749 12:47:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:16.749 12:47:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:16.749 12:47:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:16.749 00:05:16.749 real 0m0.262s 00:05:16.749 user 0m0.230s 00:05:16.749 sys 0m0.024s 00:05:16.749 12:47:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.749 12:47:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.749 ************************************ 00:05:16.749 END TEST rpc_trace_cmd_test 00:05:16.749 ************************************ 00:05:16.749 12:47:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:16.749 12:47:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:16.749 12:47:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:16.749 12:47:08 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.749 12:47:08 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.749 12:47:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.749 ************************************ 00:05:16.749 START TEST rpc_daemon_integrity 00:05:16.749 ************************************ 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.749 12:47:08 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.749 { 00:05:16.749 "name": "Malloc2", 00:05:16.749 "aliases": [ 00:05:16.749 "be11c525-7ec4-4f3e-b89a-f54705248d6d" 00:05:16.749 ], 00:05:16.749 "product_name": "Malloc disk", 00:05:16.749 "block_size": 512, 00:05:16.749 "num_blocks": 16384, 00:05:16.749 "uuid": "be11c525-7ec4-4f3e-b89a-f54705248d6d", 00:05:16.749 "assigned_rate_limits": { 00:05:16.749 "rw_ios_per_sec": 0, 00:05:16.749 "rw_mbytes_per_sec": 0, 00:05:16.749 "r_mbytes_per_sec": 0, 00:05:16.749 "w_mbytes_per_sec": 0 00:05:16.749 }, 00:05:16.749 "claimed": false, 00:05:16.749 "zoned": false, 00:05:16.749 "supported_io_types": { 00:05:16.749 "read": true, 00:05:16.749 "write": true, 00:05:16.749 "unmap": true, 00:05:16.749 "flush": true, 00:05:16.749 "reset": true, 00:05:16.749 "nvme_admin": false, 00:05:16.749 "nvme_io": false, 00:05:16.749 "nvme_io_md": false, 00:05:16.749 "write_zeroes": true, 00:05:16.749 "zcopy": true, 00:05:16.749 "get_zone_info": false, 00:05:16.749 "zone_management": false, 00:05:16.749 "zone_append": false, 00:05:16.749 "compare": false, 00:05:16.749 "compare_and_write": false, 00:05:16.749 "abort": true, 00:05:16.749 "seek_hole": false, 00:05:16.749 "seek_data": false, 00:05:16.749 "copy": true, 00:05:16.749 "nvme_iov_md": false 00:05:16.749 }, 00:05:16.749 "memory_domains": [ 00:05:16.749 { 00:05:16.749 "dma_device_id": "system", 00:05:16.749 "dma_device_type": 1 00:05:16.749 }, 00:05:16.749 { 00:05:16.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.749 "dma_device_type": 2 00:05:16.749 } 00:05:16.749 ], 00:05:16.749 "driver_specific": {} 00:05:16.749 } 00:05:16.749 ]' 00:05:16.749 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.008 [2024-08-11 12:47:08.362143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.008 [2024-08-11 12:47:08.362244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.008 [2024-08-11 12:47:08.362302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:17.008 [2024-08-11 12:47:08.362318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.008 [2024-08-11 12:47:08.365227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.008 [2024-08-11 12:47:08.365333] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.008 Passthru0 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:17.008 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.008 { 00:05:17.008 "name": "Malloc2", 00:05:17.008 "aliases": [ 00:05:17.008 "be11c525-7ec4-4f3e-b89a-f54705248d6d" 00:05:17.008 ], 00:05:17.008 "product_name": "Malloc disk", 00:05:17.008 "block_size": 512, 00:05:17.008 "num_blocks": 16384, 00:05:17.008 "uuid": "be11c525-7ec4-4f3e-b89a-f54705248d6d", 00:05:17.008 "assigned_rate_limits": { 00:05:17.008 "rw_ios_per_sec": 0, 00:05:17.008 "rw_mbytes_per_sec": 0, 00:05:17.008 "r_mbytes_per_sec": 0, 00:05:17.008 "w_mbytes_per_sec": 0 00:05:17.008 }, 00:05:17.008 "claimed": true, 00:05:17.008 "claim_type": "exclusive_write", 00:05:17.008 "zoned": false, 00:05:17.008 "supported_io_types": { 00:05:17.008 "read": true, 00:05:17.008 "write": true, 00:05:17.008 "unmap": true, 00:05:17.008 "flush": true, 00:05:17.008 "reset": true, 00:05:17.008 "nvme_admin": false, 00:05:17.008 "nvme_io": false, 00:05:17.008 "nvme_io_md": false, 00:05:17.008 "write_zeroes": true, 00:05:17.008 "zcopy": true, 00:05:17.008 "get_zone_info": false, 00:05:17.008 "zone_management": false, 00:05:17.008 "zone_append": false, 00:05:17.008 "compare": false, 00:05:17.008 "compare_and_write": false, 00:05:17.008 "abort": true, 00:05:17.008 "seek_hole": false, 00:05:17.008 "seek_data": false, 00:05:17.008 "copy": true, 00:05:17.008 "nvme_iov_md": false 00:05:17.008 }, 00:05:17.008 "memory_domains": [ 00:05:17.008 { 00:05:17.008 "dma_device_id": "system", 00:05:17.008 "dma_device_type": 1 00:05:17.008 }, 00:05:17.008 { 00:05:17.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.008 "dma_device_type": 2 00:05:17.008 } 00:05:17.008 ], 00:05:17.008 "driver_specific": {} 00:05:17.008 }, 00:05:17.008 { 00:05:17.008 "name": "Passthru0", 00:05:17.008 "aliases": [ 00:05:17.008 "2e2fa0b7-e911-5d4a-982b-b57d81cc470d" 00:05:17.008 ], 00:05:17.008 "product_name": "passthru", 00:05:17.008 "block_size": 512, 00:05:17.008 "num_blocks": 16384, 00:05:17.008 "uuid": "2e2fa0b7-e911-5d4a-982b-b57d81cc470d", 00:05:17.008 "assigned_rate_limits": { 00:05:17.008 "rw_ios_per_sec": 0, 00:05:17.008 "rw_mbytes_per_sec": 0, 00:05:17.008 "r_mbytes_per_sec": 0, 00:05:17.008 "w_mbytes_per_sec": 0 00:05:17.008 }, 00:05:17.008 "claimed": false, 00:05:17.008 "zoned": false, 00:05:17.008 "supported_io_types": { 00:05:17.008 "read": true, 00:05:17.008 "write": true, 00:05:17.008 "unmap": true, 00:05:17.008 "flush": true, 00:05:17.008 "reset": true, 00:05:17.009 "nvme_admin": false, 00:05:17.009 "nvme_io": false, 00:05:17.009 "nvme_io_md": false, 00:05:17.009 "write_zeroes": true, 00:05:17.009 "zcopy": true, 00:05:17.009 "get_zone_info": false, 00:05:17.009 "zone_management": false, 00:05:17.009 "zone_append": false, 00:05:17.009 "compare": false, 00:05:17.009 "compare_and_write": false, 00:05:17.009 "abort": true, 00:05:17.009 "seek_hole": false, 00:05:17.009 "seek_data": false, 00:05:17.009 "copy": true, 00:05:17.009 "nvme_iov_md": false 00:05:17.009 }, 00:05:17.009 
"memory_domains": [ 00:05:17.009 { 00:05:17.009 "dma_device_id": "system", 00:05:17.009 "dma_device_type": 1 00:05:17.009 }, 00:05:17.009 { 00:05:17.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.009 "dma_device_type": 2 00:05:17.009 } 00:05:17.009 ], 00:05:17.009 "driver_specific": { 00:05:17.009 "passthru": { 00:05:17.009 "name": "Passthru0", 00:05:17.009 "base_bdev_name": "Malloc2" 00:05:17.009 } 00:05:17.009 } 00:05:17.009 } 00:05:17.009 ]' 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.009 00:05:17.009 real 0m0.343s 00:05:17.009 user 0m0.220s 00:05:17.009 sys 0m0.041s 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.009 12:47:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.009 ************************************ 00:05:17.009 END TEST rpc_daemon_integrity 00:05:17.009 ************************************ 00:05:17.009 12:47:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.009 12:47:08 rpc -- rpc/rpc.sh@84 -- # killprocess 68881 00:05:17.009 12:47:08 rpc -- common/autotest_common.sh@946 -- # '[' -z 68881 ']' 00:05:17.009 12:47:08 rpc -- common/autotest_common.sh@950 -- # kill -0 68881 00:05:17.009 12:47:08 rpc -- common/autotest_common.sh@951 -- # uname 00:05:17.009 12:47:08 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.009 12:47:08 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68881 00:05:17.267 12:47:08 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.267 killing process with pid 68881 00:05:17.268 12:47:08 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.268 12:47:08 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68881' 00:05:17.268 12:47:08 rpc -- common/autotest_common.sh@965 -- # kill 68881 00:05:17.268 12:47:08 rpc -- common/autotest_common.sh@970 -- # wait 68881 00:05:17.526 00:05:17.526 real 0m2.834s 00:05:17.526 user 0m3.758s 
00:05:17.526 sys 0m0.674s 00:05:17.526 12:47:08 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.526 ************************************ 00:05:17.526 END TEST rpc 00:05:17.526 ************************************ 00:05:17.526 12:47:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.526 12:47:08 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:17.526 12:47:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.526 12:47:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.526 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:05:17.526 ************************************ 00:05:17.526 START TEST skip_rpc 00:05:17.526 ************************************ 00:05:17.526 12:47:08 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:17.526 * Looking for test storage... 00:05:17.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.526 12:47:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.526 12:47:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.526 12:47:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:17.526 12:47:09 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.526 12:47:09 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.526 12:47:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.526 ************************************ 00:05:17.526 START TEST skip_rpc 00:05:17.526 ************************************ 00:05:17.526 12:47:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:17.526 12:47:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69080 00:05:17.526 12:47:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.526 12:47:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:17.526 12:47:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:17.785 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:17.785 [2024-08-11 12:47:09.169403] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
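The spdk_tgt instance starting above was launched with --no-rpc-server, so the RPC Unix socket is never created; the test's next step is to prove that an RPC call fails cleanly. In these scripts rpc_cmd is a thin wrapper around scripts/rpc.py, so the same check can be sketched directly (paths as used in this run; the trailing echo is illustrative only):

  # start the target with no RPC listener, as in the log
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  # expected to fail with a non-zero exit code, since no RPC server exists
  scripts/rpc.py spdk_get_version || echo 'RPC unavailable as expected'
  kill $spdk_pid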
00:05:17.785 [2024-08-11 12:47:09.169593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69080 ] 00:05:17.785 [2024-08-11 12:47:09.307673] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.785 [2024-08-11 12:47:09.339757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@646 -- # local es=0 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # rpc_cmd spdk_get_version 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # es=1 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69080 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 69080 ']' 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 69080 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69080 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.054 killing process with pid 69080 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69080' 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 69080 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 69080 00:05:23.054 00:05:23.054 real 0m5.321s 00:05:23.054 user 0m4.981s 00:05:23.054 sys 0m0.244s 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.054 12:47:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.054 ************************************ 00:05:23.054 END TEST skip_rpc 00:05:23.054 
************************************ 00:05:23.054 12:47:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.054 12:47:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.054 12:47:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.054 12:47:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.054 ************************************ 00:05:23.054 START TEST skip_rpc_with_json 00:05:23.054 ************************************ 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69162 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69162 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 69162 ']' 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.054 12:47:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.054 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:23.054 [2024-08-11 12:47:14.540071] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
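This skip_rpc_with_json target keeps its RPC server enabled. The test first confirms the TCP transport is absent, creates it, dumps the runtime state with save_config (the JSON that follows below), and finally restarts the target with --no-rpc-server plus --json to show the transport is rebuilt purely from the saved file. A rough shell equivalent, using the CONFIG_PATH and LOG_PATH defined earlier (the output redirection is added here for illustration):

  scripts/rpc.py nvmf_get_transports --trtype tcp    # fails: transport does not exist yet
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # restart with no RPC server, loading the saved configuration instead
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5 && kill $!
  grep -q 'TCP Transport Init' test/rpc/log.txt      # the transport came back from JSON alone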
00:05:23.054 [2024-08-11 12:47:14.540913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69162 ] 00:05:23.313 [2024-08-11 12:47:14.685587] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.313 [2024-08-11 12:47:14.719421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.276 [2024-08-11 12:47:15.540125] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.276 request: 00:05:24.276 { 00:05:24.276 "trtype": "tcp", 00:05:24.276 "method": "nvmf_get_transports", 00:05:24.276 "req_id": 1 00:05:24.276 } 00:05:24.276 Got JSON-RPC error response 00:05:24.276 response: 00:05:24.276 { 00:05:24.276 "code": -19, 00:05:24.276 "message": "No such device" 00:05:24.276 } 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.276 [2024-08-11 12:47:15.552358] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:24.276 12:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.277 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:24.277 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.277 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:24.277 12:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.277 { 00:05:24.277 "subsystems": [ 00:05:24.277 { 00:05:24.277 "subsystem": "keyring", 00:05:24.277 "config": [] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "iobuf", 00:05:24.277 "config": [ 00:05:24.277 { 00:05:24.277 "method": "iobuf_set_options", 00:05:24.277 "params": { 00:05:24.277 "small_pool_count": 8192, 00:05:24.277 "large_pool_count": 1024, 00:05:24.277 "small_bufsize": 8192, 00:05:24.277 "large_bufsize": 135168 00:05:24.277 } 00:05:24.277 } 00:05:24.277 ] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "sock", 00:05:24.277 "config": [ 00:05:24.277 { 00:05:24.277 "method": "sock_set_default_impl", 00:05:24.277 "params": { 00:05:24.277 "impl_name": "posix" 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "sock_impl_set_options", 00:05:24.277 "params": { 00:05:24.277 "impl_name": "ssl", 00:05:24.277 "recv_buf_size": 4096, 00:05:24.277 "send_buf_size": 4096, 
00:05:24.277 "enable_recv_pipe": true, 00:05:24.277 "enable_quickack": false, 00:05:24.277 "enable_placement_id": 0, 00:05:24.277 "enable_zerocopy_send_server": true, 00:05:24.277 "enable_zerocopy_send_client": false, 00:05:24.277 "zerocopy_threshold": 0, 00:05:24.277 "tls_version": 0, 00:05:24.277 "enable_ktls": false 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "sock_impl_set_options", 00:05:24.277 "params": { 00:05:24.277 "impl_name": "posix", 00:05:24.277 "recv_buf_size": 2097152, 00:05:24.277 "send_buf_size": 2097152, 00:05:24.277 "enable_recv_pipe": true, 00:05:24.277 "enable_quickack": false, 00:05:24.277 "enable_placement_id": 0, 00:05:24.277 "enable_zerocopy_send_server": true, 00:05:24.277 "enable_zerocopy_send_client": false, 00:05:24.277 "zerocopy_threshold": 0, 00:05:24.277 "tls_version": 0, 00:05:24.277 "enable_ktls": false 00:05:24.277 } 00:05:24.277 } 00:05:24.277 ] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "vmd", 00:05:24.277 "config": [] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "accel", 00:05:24.277 "config": [ 00:05:24.277 { 00:05:24.277 "method": "accel_set_options", 00:05:24.277 "params": { 00:05:24.277 "small_cache_size": 128, 00:05:24.277 "large_cache_size": 16, 00:05:24.277 "task_count": 2048, 00:05:24.277 "sequence_count": 2048, 00:05:24.277 "buf_count": 2048 00:05:24.277 } 00:05:24.277 } 00:05:24.277 ] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "bdev", 00:05:24.277 "config": [ 00:05:24.277 { 00:05:24.277 "method": "bdev_set_options", 00:05:24.277 "params": { 00:05:24.277 "bdev_io_pool_size": 65535, 00:05:24.277 "bdev_io_cache_size": 256, 00:05:24.277 "bdev_auto_examine": true, 00:05:24.277 "iobuf_small_cache_size": 128, 00:05:24.277 "iobuf_large_cache_size": 16 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "bdev_raid_set_options", 00:05:24.277 "params": { 00:05:24.277 "process_window_size_kb": 1024, 00:05:24.277 "process_max_bandwidth_mb_sec": 0 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "bdev_iscsi_set_options", 00:05:24.277 "params": { 00:05:24.277 "timeout_sec": 30 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "bdev_nvme_set_options", 00:05:24.277 "params": { 00:05:24.277 "action_on_timeout": "none", 00:05:24.277 "timeout_us": 0, 00:05:24.277 "timeout_admin_us": 0, 00:05:24.277 "keep_alive_timeout_ms": 10000, 00:05:24.277 "arbitration_burst": 0, 00:05:24.277 "low_priority_weight": 0, 00:05:24.277 "medium_priority_weight": 0, 00:05:24.277 "high_priority_weight": 0, 00:05:24.277 "nvme_adminq_poll_period_us": 10000, 00:05:24.277 "nvme_ioq_poll_period_us": 0, 00:05:24.277 "io_queue_requests": 0, 00:05:24.277 "delay_cmd_submit": true, 00:05:24.277 "transport_retry_count": 4, 00:05:24.277 "bdev_retry_count": 3, 00:05:24.277 "transport_ack_timeout": 0, 00:05:24.277 "ctrlr_loss_timeout_sec": 0, 00:05:24.277 "reconnect_delay_sec": 0, 00:05:24.277 "fast_io_fail_timeout_sec": 0, 00:05:24.277 "disable_auto_failback": false, 00:05:24.277 "generate_uuids": false, 00:05:24.277 "transport_tos": 0, 00:05:24.277 "nvme_error_stat": false, 00:05:24.277 "rdma_srq_size": 0, 00:05:24.277 "io_path_stat": false, 00:05:24.277 "allow_accel_sequence": false, 00:05:24.277 "rdma_max_cq_size": 0, 00:05:24.277 "rdma_cm_event_timeout_ms": 0, 00:05:24.277 "dhchap_digests": [ 00:05:24.277 "sha256", 00:05:24.277 "sha384", 00:05:24.277 "sha512" 00:05:24.277 ], 00:05:24.277 "dhchap_dhgroups": [ 00:05:24.277 "null", 00:05:24.277 "ffdhe2048", 00:05:24.277 
"ffdhe3072", 00:05:24.277 "ffdhe4096", 00:05:24.277 "ffdhe6144", 00:05:24.277 "ffdhe8192" 00:05:24.277 ] 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "bdev_nvme_set_hotplug", 00:05:24.277 "params": { 00:05:24.277 "period_us": 100000, 00:05:24.277 "enable": false 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "bdev_wait_for_examine" 00:05:24.277 } 00:05:24.277 ] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "scsi", 00:05:24.277 "config": null 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "scheduler", 00:05:24.277 "config": [ 00:05:24.277 { 00:05:24.277 "method": "framework_set_scheduler", 00:05:24.277 "params": { 00:05:24.277 "name": "static" 00:05:24.277 } 00:05:24.277 } 00:05:24.277 ] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "vhost_scsi", 00:05:24.277 "config": [] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "vhost_blk", 00:05:24.277 "config": [] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "ublk", 00:05:24.277 "config": [] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "nbd", 00:05:24.277 "config": [] 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "subsystem": "nvmf", 00:05:24.277 "config": [ 00:05:24.277 { 00:05:24.277 "method": "nvmf_set_config", 00:05:24.277 "params": { 00:05:24.277 "discovery_filter": "match_any", 00:05:24.277 "admin_cmd_passthru": { 00:05:24.277 "identify_ctrlr": false 00:05:24.277 } 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "nvmf_set_max_subsystems", 00:05:24.277 "params": { 00:05:24.277 "max_subsystems": 1024 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "nvmf_set_crdt", 00:05:24.277 "params": { 00:05:24.277 "crdt1": 0, 00:05:24.277 "crdt2": 0, 00:05:24.277 "crdt3": 0 00:05:24.277 } 00:05:24.277 }, 00:05:24.277 { 00:05:24.277 "method": "nvmf_create_transport", 00:05:24.277 "params": { 00:05:24.277 "trtype": "TCP", 00:05:24.277 "max_queue_depth": 128, 00:05:24.277 "max_io_qpairs_per_ctrlr": 127, 00:05:24.277 "in_capsule_data_size": 4096, 00:05:24.277 "max_io_size": 131072, 00:05:24.277 "io_unit_size": 131072, 00:05:24.277 "max_aq_depth": 128, 00:05:24.277 "num_shared_buffers": 511, 00:05:24.277 "buf_cache_size": 4294967295, 00:05:24.277 "dif_insert_or_strip": false, 00:05:24.277 "zcopy": false, 00:05:24.277 "c2h_success": true, 00:05:24.277 "sock_priority": 0, 00:05:24.277 "abort_timeout_sec": 1, 00:05:24.277 "ack_timeout": 0, 00:05:24.277 "data_wr_pool_size": 0 00:05:24.277 } 00:05:24.277 } 00:05:24.278 ] 00:05:24.278 }, 00:05:24.278 { 00:05:24.278 "subsystem": "iscsi", 00:05:24.278 "config": [ 00:05:24.278 { 00:05:24.278 "method": "iscsi_set_options", 00:05:24.278 "params": { 00:05:24.278 "node_base": "iqn.2016-06.io.spdk", 00:05:24.278 "max_sessions": 128, 00:05:24.278 "max_connections_per_session": 2, 00:05:24.278 "max_queue_depth": 64, 00:05:24.278 "default_time2wait": 2, 00:05:24.278 "default_time2retain": 20, 00:05:24.278 "first_burst_length": 8192, 00:05:24.278 "immediate_data": true, 00:05:24.278 "allow_duplicated_isid": false, 00:05:24.278 "error_recovery_level": 0, 00:05:24.278 "nop_timeout": 60, 00:05:24.278 "nop_in_interval": 30, 00:05:24.278 "disable_chap": false, 00:05:24.278 "require_chap": false, 00:05:24.278 "mutual_chap": false, 00:05:24.278 "chap_group": 0, 00:05:24.278 "max_large_datain_per_connection": 64, 00:05:24.278 "max_r2t_per_connection": 4, 00:05:24.278 "pdu_pool_size": 36864, 00:05:24.278 "immediate_data_pool_size": 16384, 00:05:24.278 "data_out_pool_size": 2048 
00:05:24.278 } 00:05:24.278 } 00:05:24.278 ] 00:05:24.278 } 00:05:24.278 ] 00:05:24.278 } 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69162 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 69162 ']' 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 69162 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69162 00:05:24.278 killing process with pid 69162 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69162' 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 69162 00:05:24.278 12:47:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 69162 00:05:24.539 12:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69190 00:05:24.539 12:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.539 12:47:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69190 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 69190 ']' 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 69190 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69190 00:05:29.805 killing process with pid 69190 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69190' 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 69190 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 69190 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:29.805 ************************************ 00:05:29.805 END TEST skip_rpc_with_json 00:05:29.805 ************************************ 00:05:29.805 00:05:29.805 real 0m6.942s 00:05:29.805 user 0m6.821s 00:05:29.805 sys 0m0.567s 00:05:29.805 12:47:21 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.805 12:47:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.063 12:47:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.063 12:47:21 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.063 12:47:21 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.063 12:47:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.063 ************************************ 00:05:30.063 START TEST skip_rpc_with_delay 00:05:30.063 ************************************ 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # local es=0 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:30.063 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.064 [2024-08-11 12:47:21.514371] app.c: 833:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
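The error above is the point of this test: --wait-for-rpc holds back framework initialization until an RPC arrives, which cannot work when --no-rpc-server removes the RPC server entirely, so spdk_tgt refuses to start and the wrapper records es=1. For contrast, a sketch of the normal deferred-init flow follows; framework_start_init is the usual follow-up RPC but does not appear in this log, so treat it as an assumption:

  # rejected combination, exactly as logged above
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc

  # deferred initialization with the RPC server available (sketch)
  build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  sleep 1
  scripts/rpc.py framework_start_init    # assumed RPC name; completes the deferred startup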
00:05:30.064 [2024-08-11 12:47:21.514524] app.c: 712:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # es=1 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:30.064 ************************************ 00:05:30.064 END TEST skip_rpc_with_delay 00:05:30.064 ************************************ 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:30.064 00:05:30.064 real 0m0.140s 00:05:30.064 user 0m0.071s 00:05:30.064 sys 0m0.068s 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.064 12:47:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.064 12:47:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.064 12:47:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.064 12:47:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.064 12:47:21 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.064 12:47:21 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.064 12:47:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.064 ************************************ 00:05:30.064 START TEST exit_on_failed_rpc_init 00:05:30.064 ************************************ 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69302 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69302 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 69302 ']' 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.064 12:47:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.322 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:30.322 [2024-08-11 12:47:21.707645] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:05:30.322 [2024-08-11 12:47:21.707764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69302 ] 00:05:30.322 [2024-08-11 12:47:21.844581] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.322 [2024-08-11 12:47:21.881458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.581 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.581 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:30.581 12:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.581 12:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.581 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # local es=0 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.582 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.582 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:30.582 [2024-08-11 12:47:22.163082] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:30.582 [2024-08-11 12:47:22.163248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69312 ] 00:05:30.840 [2024-08-11 12:47:22.318555] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.840 [2024-08-11 12:47:22.363136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.840 [2024-08-11 12:47:22.363257] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
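The "in use. Specify another." failure above, and the "Unable to start RPC service" line that follows it, come from a second spdk_tgt being pointed at the same default socket, /var/tmp/spdk.sock, that the first instance still owns; exit_on_failed_rpc_init only passes if that second instance exits with a non-zero status. When two targets are actually meant to coexist, each gets its own RPC socket via -r, roughly as below (the second socket path is illustrative):

  build/bin/spdk_tgt -m 0x1 &                              # first target, default /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &       # second target, separate socket and cores
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # address the second instance explicitly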
00:05:30.840 [2024-08-11 12:47:22.363295] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:30.840 [2024-08-11 12:47:22.363330] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # es=234 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # es=106 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # case "$es" in 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # es=1 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69302 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 69302 ']' 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 69302 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69302 00:05:31.099 killing process with pid 69302 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69302' 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 69302 00:05:31.099 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 69302 00:05:31.357 00:05:31.357 real 0m1.182s 00:05:31.357 user 0m1.390s 00:05:31.357 sys 0m0.376s 00:05:31.357 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.357 12:47:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.357 ************************************ 00:05:31.357 END TEST exit_on_failed_rpc_init 00:05:31.357 ************************************ 00:05:31.357 12:47:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.357 00:05:31.357 real 0m13.899s 00:05:31.357 user 0m13.361s 00:05:31.357 sys 0m1.446s 00:05:31.357 12:47:22 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.357 12:47:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.357 ************************************ 00:05:31.357 END TEST skip_rpc 00:05:31.357 ************************************ 00:05:31.357 12:47:22 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.357 12:47:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.357 12:47:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.357 12:47:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.357 
************************************ 00:05:31.357 START TEST rpc_client 00:05:31.357 ************************************ 00:05:31.357 12:47:22 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.616 * Looking for test storage... 00:05:31.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:31.616 12:47:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:31.616 OK 00:05:31.616 12:47:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.616 00:05:31.616 real 0m0.125s 00:05:31.616 user 0m0.062s 00:05:31.616 sys 0m0.070s 00:05:31.616 12:47:23 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.616 ************************************ 00:05:31.616 END TEST rpc_client 00:05:31.616 ************************************ 00:05:31.616 12:47:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 12:47:23 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.616 12:47:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.616 12:47:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.616 12:47:23 -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 ************************************ 00:05:31.616 START TEST json_config 00:05:31.616 ************************************ 00:05:31.616 12:47:23 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1f8a89f1-be0b-44bd-ab10-6c9c6254bb74 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1f8a89f1-be0b-44bd-ab10-6c9c6254bb74 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.616 12:47:23 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.616 12:47:23 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.616 12:47:23 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.616 12:47:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.616 12:47:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.616 12:47:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.616 12:47:23 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.616 12:47:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@47 -- # : 0 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.616 12:47:23 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.616 WARNING: No tests are enabled so not running JSON configuration tests 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:31.616 12:47:23 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:31.616 00:05:31.616 real 0m0.083s 00:05:31.616 user 0m0.035s 00:05:31.616 sys 0m0.048s 00:05:31.616 12:47:23 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.616 ************************************ 00:05:31.616 END TEST json_config 00:05:31.616 ************************************ 00:05:31.616 12:47:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 12:47:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.616 12:47:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.616 12:47:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.616 12:47:23 -- common/autotest_common.sh@10 -- # set +x 00:05:31.874 ************************************ 00:05:31.874 START TEST json_config_extra_key 00:05:31.874 ************************************ 00:05:31.874 12:47:23 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1f8a89f1-be0b-44bd-ab10-6c9c6254bb74 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1f8a89f1-be0b-44bd-ab10-6c9c6254bb74 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.874 12:47:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.874 12:47:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.874 12:47:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.874 
12:47:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.874 12:47:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.874 12:47:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.874 12:47:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:31.874 12:47:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.874 12:47:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.874 12:47:23 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:31.874 INFO: launching applications... 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:31.874 12:47:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69471 00:05:31.874 Waiting for target to run... 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69471 /var/tmp/spdk_tgt.sock 00:05:31.874 12:47:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.874 12:47:23 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 69471 ']' 00:05:31.874 12:47:23 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.875 12:47:23 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.875 12:47:23 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.875 12:47:23 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.875 12:47:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.875 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:31.875 [2024-08-11 12:47:23.395984] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
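Note: stripped of the xtrace noise, the launch captured above is just the target binary started with a JSON config and an RPC socket, followed by a readiness poll. A minimal sketch using the same paths as the trace; the retry loop below is a stand-in for the test's waitforlisten helper, not the helper itself:
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
CFG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
"$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" --json "$CFG" &
tgt_pid=$!
# poll until the target answers on its RPC socket, as waitforlisten does with retries
for _ in $(seq 1 30); do "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break; sleep 0.5; done
# the test later stops the target with SIGINT, as the shutdown sequence below shows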
00:05:31.875 [2024-08-11 12:47:23.396156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69471 ] 00:05:32.133 [2024-08-11 12:47:23.712263] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.390 [2024-08-11 12:47:23.735233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.957 12:47:24 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.957 00:05:32.957 12:47:24 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:32.957 12:47:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:32.957 INFO: shutting down applications... 00:05:32.957 12:47:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:32.957 12:47:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:32.957 12:47:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:32.958 12:47:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.958 12:47:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69471 ]] 00:05:32.958 12:47:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69471 00:05:32.958 12:47:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.958 12:47:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.958 12:47:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69471 00:05:32.958 12:47:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.525 12:47:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.525 12:47:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.525 12:47:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69471 00:05:33.525 12:47:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.525 12:47:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:33.525 12:47:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.525 SPDK target shutdown done 00:05:33.525 Success 00:05:33.525 12:47:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.525 12:47:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:33.525 00:05:33.525 real 0m1.666s 00:05:33.525 user 0m1.521s 00:05:33.525 sys 0m0.396s 00:05:33.525 12:47:24 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.525 ************************************ 00:05:33.525 END TEST json_config_extra_key 00:05:33.525 ************************************ 00:05:33.525 12:47:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.525 12:47:24 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.525 12:47:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.525 12:47:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.525 12:47:24 -- common/autotest_common.sh@10 -- # set +x 00:05:33.525 ************************************ 00:05:33.525 START TEST alias_rpc 00:05:33.525 
************************************ 00:05:33.525 12:47:24 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.525 * Looking for test storage... 00:05:33.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:33.525 12:47:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.525 12:47:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69536 00:05:33.525 12:47:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69536 00:05:33.525 12:47:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.525 12:47:25 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 69536 ']' 00:05:33.525 12:47:25 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.525 12:47:25 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.525 12:47:25 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.525 12:47:25 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.525 12:47:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.525 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:33.525 [2024-08-11 12:47:25.116239] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:33.525 [2024-08-11 12:47:25.116471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69536 ] 00:05:33.783 [2024-08-11 12:47:25.263305] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.783 [2024-08-11 12:47:25.296562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.719 12:47:26 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:34.719 12:47:26 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:34.719 12:47:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:34.719 12:47:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69536 00:05:34.719 12:47:26 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 69536 ']' 00:05:34.719 12:47:26 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 69536 00:05:34.719 12:47:26 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:34.719 12:47:26 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:34.978 12:47:26 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69536 00:05:34.978 12:47:26 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:34.978 killing process with pid 69536 00:05:34.978 12:47:26 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:34.978 12:47:26 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69536' 00:05:34.978 12:47:26 alias_rpc -- common/autotest_common.sh@965 -- # kill 69536 00:05:34.978 12:47:26 alias_rpc -- common/autotest_common.sh@970 -- # wait 69536 00:05:35.237 00:05:35.237 real 0m1.685s 00:05:35.237 user 0m2.004s 00:05:35.237 sys 0m0.361s 
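Note: the core of the alias_rpc run above is a single RPC round-trip against the freshly started target: a JSON configuration using deprecated (aliased) method names is piped into load_config with -i, which presumably lets the loader accept those alias names. A minimal sketch; the file names here are hypothetical, the real test feeds its own fixture:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" load_config -i < aliased_config.json    # feed a config that uses aliased method names
"$RPC" save_config > applied_config.json       # optional: read back the applied configuration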
00:05:35.237 12:47:26 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.237 ************************************ 00:05:35.237 END TEST alias_rpc 00:05:35.237 ************************************ 00:05:35.237 12:47:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.237 12:47:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:35.237 12:47:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:35.237 12:47:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.237 12:47:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.237 12:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:35.237 ************************************ 00:05:35.237 START TEST spdkcli_tcp 00:05:35.237 ************************************ 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:35.237 * Looking for test storage... 00:05:35.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69608 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69608 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 69608 ']' 00:05:35.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.237 12:47:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.237 12:47:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.496 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:35.496 [2024-08-11 12:47:26.879142] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
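Note: the spdkcli_tcp exchange that follows bridges the target's UNIX-domain RPC socket to TCP port 9998 with socat and then drives rpc.py over TCP. A minimal sketch using the same addresses and flags as the trace (-r connection retries, -t timeout):
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"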
00:05:35.496 [2024-08-11 12:47:26.879342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69608 ] 00:05:35.496 [2024-08-11 12:47:27.024832] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.496 [2024-08-11 12:47:27.061092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.496 [2024-08-11 12:47:27.061131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.431 12:47:27 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.431 12:47:27 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:36.431 12:47:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:36.431 12:47:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69625 00:05:36.431 12:47:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:36.431 [ 00:05:36.431 "bdev_malloc_delete", 00:05:36.431 "bdev_malloc_create", 00:05:36.431 "bdev_null_resize", 00:05:36.431 "bdev_null_delete", 00:05:36.431 "bdev_null_create", 00:05:36.431 "bdev_nvme_cuse_unregister", 00:05:36.431 "bdev_nvme_cuse_register", 00:05:36.431 "bdev_opal_new_user", 00:05:36.431 "bdev_opal_set_lock_state", 00:05:36.431 "bdev_opal_delete", 00:05:36.431 "bdev_opal_get_info", 00:05:36.431 "bdev_opal_create", 00:05:36.431 "bdev_nvme_opal_revert", 00:05:36.431 "bdev_nvme_opal_init", 00:05:36.431 "bdev_nvme_send_cmd", 00:05:36.431 "bdev_nvme_get_path_iostat", 00:05:36.431 "bdev_nvme_get_mdns_discovery_info", 00:05:36.431 "bdev_nvme_stop_mdns_discovery", 00:05:36.431 "bdev_nvme_start_mdns_discovery", 00:05:36.432 "bdev_nvme_set_multipath_policy", 00:05:36.432 "bdev_nvme_set_preferred_path", 00:05:36.432 "bdev_nvme_get_io_paths", 00:05:36.432 "bdev_nvme_remove_error_injection", 00:05:36.432 "bdev_nvme_add_error_injection", 00:05:36.432 "bdev_nvme_get_discovery_info", 00:05:36.432 "bdev_nvme_stop_discovery", 00:05:36.432 "bdev_nvme_start_discovery", 00:05:36.432 "bdev_nvme_get_controller_health_info", 00:05:36.432 "bdev_nvme_disable_controller", 00:05:36.432 "bdev_nvme_enable_controller", 00:05:36.432 "bdev_nvme_reset_controller", 00:05:36.432 "bdev_nvme_get_transport_statistics", 00:05:36.432 "bdev_nvme_apply_firmware", 00:05:36.432 "bdev_nvme_detach_controller", 00:05:36.432 "bdev_nvme_get_controllers", 00:05:36.432 "bdev_nvme_attach_controller", 00:05:36.432 "bdev_nvme_set_hotplug", 00:05:36.432 "bdev_nvme_set_options", 00:05:36.432 "bdev_passthru_delete", 00:05:36.432 "bdev_passthru_create", 00:05:36.432 "bdev_lvol_set_parent_bdev", 00:05:36.432 "bdev_lvol_set_parent", 00:05:36.432 "bdev_lvol_check_shallow_copy", 00:05:36.432 "bdev_lvol_start_shallow_copy", 00:05:36.432 "bdev_lvol_grow_lvstore", 00:05:36.432 "bdev_lvol_get_lvols", 00:05:36.432 "bdev_lvol_get_lvstores", 00:05:36.432 "bdev_lvol_delete", 00:05:36.432 "bdev_lvol_set_read_only", 00:05:36.432 "bdev_lvol_resize", 00:05:36.432 "bdev_lvol_decouple_parent", 00:05:36.432 "bdev_lvol_inflate", 00:05:36.432 "bdev_lvol_rename", 00:05:36.432 "bdev_lvol_clone_bdev", 00:05:36.432 "bdev_lvol_clone", 00:05:36.432 "bdev_lvol_snapshot", 00:05:36.432 "bdev_lvol_create", 00:05:36.432 "bdev_lvol_delete_lvstore", 00:05:36.432 "bdev_lvol_rename_lvstore", 00:05:36.432 "bdev_lvol_create_lvstore", 
00:05:36.432 "bdev_raid_set_options", 00:05:36.432 "bdev_raid_remove_base_bdev", 00:05:36.432 "bdev_raid_add_base_bdev", 00:05:36.432 "bdev_raid_delete", 00:05:36.432 "bdev_raid_create", 00:05:36.432 "bdev_raid_get_bdevs", 00:05:36.432 "bdev_error_inject_error", 00:05:36.432 "bdev_error_delete", 00:05:36.432 "bdev_error_create", 00:05:36.432 "bdev_split_delete", 00:05:36.432 "bdev_split_create", 00:05:36.432 "bdev_delay_delete", 00:05:36.432 "bdev_delay_create", 00:05:36.432 "bdev_delay_update_latency", 00:05:36.432 "bdev_zone_block_delete", 00:05:36.432 "bdev_zone_block_create", 00:05:36.432 "blobfs_create", 00:05:36.432 "blobfs_detect", 00:05:36.432 "blobfs_set_cache_size", 00:05:36.432 "bdev_xnvme_delete", 00:05:36.432 "bdev_xnvme_create", 00:05:36.432 "bdev_aio_delete", 00:05:36.432 "bdev_aio_rescan", 00:05:36.432 "bdev_aio_create", 00:05:36.432 "bdev_ftl_set_property", 00:05:36.432 "bdev_ftl_get_properties", 00:05:36.432 "bdev_ftl_get_stats", 00:05:36.432 "bdev_ftl_unmap", 00:05:36.432 "bdev_ftl_unload", 00:05:36.432 "bdev_ftl_delete", 00:05:36.432 "bdev_ftl_load", 00:05:36.432 "bdev_ftl_create", 00:05:36.432 "bdev_virtio_attach_controller", 00:05:36.432 "bdev_virtio_scsi_get_devices", 00:05:36.432 "bdev_virtio_detach_controller", 00:05:36.432 "bdev_virtio_blk_set_hotplug", 00:05:36.432 "bdev_iscsi_delete", 00:05:36.432 "bdev_iscsi_create", 00:05:36.432 "bdev_iscsi_set_options", 00:05:36.432 "accel_error_inject_error", 00:05:36.432 "ioat_scan_accel_module", 00:05:36.432 "dsa_scan_accel_module", 00:05:36.432 "iaa_scan_accel_module", 00:05:36.432 "keyring_file_remove_key", 00:05:36.432 "keyring_file_add_key", 00:05:36.432 "keyring_linux_set_options", 00:05:36.432 "iscsi_get_histogram", 00:05:36.432 "iscsi_enable_histogram", 00:05:36.432 "iscsi_set_options", 00:05:36.432 "iscsi_get_auth_groups", 00:05:36.432 "iscsi_auth_group_remove_secret", 00:05:36.432 "iscsi_auth_group_add_secret", 00:05:36.432 "iscsi_delete_auth_group", 00:05:36.432 "iscsi_create_auth_group", 00:05:36.432 "iscsi_set_discovery_auth", 00:05:36.432 "iscsi_get_options", 00:05:36.432 "iscsi_target_node_request_logout", 00:05:36.432 "iscsi_target_node_set_redirect", 00:05:36.432 "iscsi_target_node_set_auth", 00:05:36.432 "iscsi_target_node_add_lun", 00:05:36.432 "iscsi_get_stats", 00:05:36.432 "iscsi_get_connections", 00:05:36.432 "iscsi_portal_group_set_auth", 00:05:36.432 "iscsi_start_portal_group", 00:05:36.432 "iscsi_delete_portal_group", 00:05:36.432 "iscsi_create_portal_group", 00:05:36.432 "iscsi_get_portal_groups", 00:05:36.432 "iscsi_delete_target_node", 00:05:36.432 "iscsi_target_node_remove_pg_ig_maps", 00:05:36.432 "iscsi_target_node_add_pg_ig_maps", 00:05:36.432 "iscsi_create_target_node", 00:05:36.432 "iscsi_get_target_nodes", 00:05:36.432 "iscsi_delete_initiator_group", 00:05:36.432 "iscsi_initiator_group_remove_initiators", 00:05:36.432 "iscsi_initiator_group_add_initiators", 00:05:36.432 "iscsi_create_initiator_group", 00:05:36.432 "iscsi_get_initiator_groups", 00:05:36.432 "nvmf_set_crdt", 00:05:36.432 "nvmf_set_config", 00:05:36.432 "nvmf_set_max_subsystems", 00:05:36.432 "nvmf_stop_mdns_prr", 00:05:36.432 "nvmf_publish_mdns_prr", 00:05:36.432 "nvmf_subsystem_get_listeners", 00:05:36.432 "nvmf_subsystem_get_qpairs", 00:05:36.432 "nvmf_subsystem_get_controllers", 00:05:36.432 "nvmf_get_stats", 00:05:36.432 "nvmf_get_transports", 00:05:36.432 "nvmf_create_transport", 00:05:36.432 "nvmf_get_targets", 00:05:36.432 "nvmf_delete_target", 00:05:36.432 "nvmf_create_target", 00:05:36.432 
"nvmf_subsystem_allow_any_host", 00:05:36.432 "nvmf_subsystem_remove_host", 00:05:36.432 "nvmf_subsystem_add_host", 00:05:36.432 "nvmf_ns_remove_host", 00:05:36.432 "nvmf_ns_add_host", 00:05:36.432 "nvmf_subsystem_remove_ns", 00:05:36.432 "nvmf_subsystem_add_ns", 00:05:36.432 "nvmf_subsystem_listener_set_ana_state", 00:05:36.432 "nvmf_discovery_get_referrals", 00:05:36.432 "nvmf_discovery_remove_referral", 00:05:36.432 "nvmf_discovery_add_referral", 00:05:36.432 "nvmf_subsystem_remove_listener", 00:05:36.432 "nvmf_subsystem_add_listener", 00:05:36.432 "nvmf_delete_subsystem", 00:05:36.432 "nvmf_create_subsystem", 00:05:36.432 "nvmf_get_subsystems", 00:05:36.432 "env_dpdk_get_mem_stats", 00:05:36.432 "nbd_get_disks", 00:05:36.432 "nbd_stop_disk", 00:05:36.432 "nbd_start_disk", 00:05:36.432 "ublk_recover_disk", 00:05:36.432 "ublk_get_disks", 00:05:36.432 "ublk_stop_disk", 00:05:36.432 "ublk_start_disk", 00:05:36.432 "ublk_destroy_target", 00:05:36.432 "ublk_create_target", 00:05:36.432 "virtio_blk_create_transport", 00:05:36.432 "virtio_blk_get_transports", 00:05:36.432 "vhost_controller_set_coalescing", 00:05:36.432 "vhost_get_controllers", 00:05:36.432 "vhost_delete_controller", 00:05:36.432 "vhost_create_blk_controller", 00:05:36.432 "vhost_scsi_controller_remove_target", 00:05:36.432 "vhost_scsi_controller_add_target", 00:05:36.432 "vhost_start_scsi_controller", 00:05:36.432 "vhost_create_scsi_controller", 00:05:36.432 "thread_set_cpumask", 00:05:36.432 "framework_get_governor", 00:05:36.432 "framework_get_scheduler", 00:05:36.432 "framework_set_scheduler", 00:05:36.432 "framework_get_reactors", 00:05:36.432 "thread_get_io_channels", 00:05:36.432 "thread_get_pollers", 00:05:36.432 "thread_get_stats", 00:05:36.432 "framework_monitor_context_switch", 00:05:36.432 "spdk_kill_instance", 00:05:36.432 "log_enable_timestamps", 00:05:36.432 "log_get_flags", 00:05:36.432 "log_clear_flag", 00:05:36.432 "log_set_flag", 00:05:36.432 "log_get_level", 00:05:36.432 "log_set_level", 00:05:36.432 "log_get_print_level", 00:05:36.432 "log_set_print_level", 00:05:36.432 "framework_enable_cpumask_locks", 00:05:36.432 "framework_disable_cpumask_locks", 00:05:36.432 "framework_wait_init", 00:05:36.432 "framework_start_init", 00:05:36.432 "scsi_get_devices", 00:05:36.432 "bdev_get_histogram", 00:05:36.432 "bdev_enable_histogram", 00:05:36.432 "bdev_set_qos_limit", 00:05:36.432 "bdev_set_qd_sampling_period", 00:05:36.432 "bdev_get_bdevs", 00:05:36.432 "bdev_reset_iostat", 00:05:36.432 "bdev_get_iostat", 00:05:36.432 "bdev_examine", 00:05:36.432 "bdev_wait_for_examine", 00:05:36.432 "bdev_set_options", 00:05:36.432 "notify_get_notifications", 00:05:36.432 "notify_get_types", 00:05:36.432 "accel_get_stats", 00:05:36.432 "accel_set_options", 00:05:36.432 "accel_set_driver", 00:05:36.432 "accel_crypto_key_destroy", 00:05:36.432 "accel_crypto_keys_get", 00:05:36.432 "accel_crypto_key_create", 00:05:36.432 "accel_assign_opc", 00:05:36.432 "accel_get_module_info", 00:05:36.432 "accel_get_opc_assignments", 00:05:36.432 "vmd_rescan", 00:05:36.432 "vmd_remove_device", 00:05:36.432 "vmd_enable", 00:05:36.432 "sock_get_default_impl", 00:05:36.432 "sock_set_default_impl", 00:05:36.432 "sock_impl_set_options", 00:05:36.432 "sock_impl_get_options", 00:05:36.432 "iobuf_get_stats", 00:05:36.432 "iobuf_set_options", 00:05:36.432 "framework_get_pci_devices", 00:05:36.432 "framework_get_config", 00:05:36.432 "framework_get_subsystems", 00:05:36.432 "trace_get_info", 00:05:36.432 "trace_get_tpoint_group_mask", 00:05:36.432 
"trace_disable_tpoint_group", 00:05:36.432 "trace_enable_tpoint_group", 00:05:36.432 "trace_clear_tpoint_mask", 00:05:36.432 "trace_set_tpoint_mask", 00:05:36.432 "keyring_get_keys", 00:05:36.432 "spdk_get_version", 00:05:36.432 "rpc_get_methods" 00:05:36.432 ] 00:05:36.691 12:47:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.691 12:47:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:36.691 12:47:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69608 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 69608 ']' 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 69608 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69608 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.691 killing process with pid 69608 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69608' 00:05:36.691 12:47:28 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 69608 00:05:36.692 12:47:28 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 69608 00:05:36.950 00:05:36.950 real 0m1.710s 00:05:36.950 user 0m3.220s 00:05:36.950 sys 0m0.420s 00:05:36.950 12:47:28 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.950 ************************************ 00:05:36.950 END TEST spdkcli_tcp 00:05:36.950 12:47:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:36.950 ************************************ 00:05:36.950 12:47:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.950 12:47:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.950 12:47:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.950 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.950 ************************************ 00:05:36.950 START TEST dpdk_mem_utility 00:05:36.950 ************************************ 00:05:36.950 12:47:28 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.950 * Looking for test storage... 
00:05:36.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:36.950 12:47:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:36.950 12:47:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69700 00:05:36.950 12:47:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69700 00:05:36.950 12:47:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.951 12:47:28 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 69700 ']' 00:05:36.951 12:47:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.951 12:47:28 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.951 12:47:28 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.951 12:47:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.951 12:47:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.209 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:37.209 [2024-08-11 12:47:28.629061] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:37.209 [2024-08-11 12:47:28.629257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69700 ] 00:05:37.209 [2024-08-11 12:47:28.776243] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.468 [2024-08-11 12:47:28.809711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.034 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.034 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:38.034 12:47:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:38.034 12:47:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:38.034 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:38.034 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.034 { 00:05:38.034 "filename": "/tmp/spdk_mem_dump.txt" 00:05:38.034 } 00:05:38.034 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:38.034 12:47:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.294 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:38.294 1 heaps totaling size 814.000000 MiB 00:05:38.294 size: 814.000000 MiB heap id: 0 00:05:38.294 end heaps---------- 00:05:38.294 8 mempools totaling size 598.116089 MiB 00:05:38.294 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:38.294 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:38.294 size: 84.521057 MiB name: bdev_io_69700 00:05:38.294 size: 51.011292 MiB name: evtpool_69700 00:05:38.294 size: 50.003479 MiB name: msgpool_69700 00:05:38.294 size: 
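Note: the memory report that follows is produced by asking the running target to dump DPDK memory statistics and then summarizing the dump with the helper script; roughly, and assuming the dump lands at the /tmp/spdk_mem_dump.txt path the RPC reports:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # writes the dump and returns its filename
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # heap/mempool/memzone totals
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # the detailed per-element view shown below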
21.763794 MiB name: PDU_Pool 00:05:38.294 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:38.294 size: 0.026123 MiB name: Session_Pool 00:05:38.294 end mempools------- 00:05:38.294 6 memzones totaling size 4.142822 MiB 00:05:38.294 size: 1.000366 MiB name: RG_ring_0_69700 00:05:38.294 size: 1.000366 MiB name: RG_ring_1_69700 00:05:38.294 size: 1.000366 MiB name: RG_ring_4_69700 00:05:38.294 size: 1.000366 MiB name: RG_ring_5_69700 00:05:38.294 size: 0.125366 MiB name: RG_ring_2_69700 00:05:38.294 size: 0.015991 MiB name: RG_ring_3_69700 00:05:38.294 end memzones------- 00:05:38.294 12:47:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:38.294 heap id: 0 total size: 814.000000 MiB number of busy elements: 304 number of free elements: 15 00:05:38.294 list of free elements. size: 12.471191 MiB 00:05:38.294 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:38.294 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:38.294 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:38.294 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:38.294 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:38.294 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:38.294 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:38.294 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:38.294 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:38.294 element at address: 0x20001aa00000 with size: 0.568237 MiB 00:05:38.294 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:38.294 element at address: 0x200000800000 with size: 0.486328 MiB 00:05:38.294 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:38.294 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:38.294 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:38.294 list of standard malloc elements. 
size: 199.266235 MiB 00:05:38.294 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:38.294 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:38.294 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:38.294 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:38.294 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:38.294 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:38.294 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:38.294 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:38.294 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:38.294 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:38.294 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:38.294 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:38.295 element at 
address: 0x200003a5a380 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91cc0 
with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94180 with size: 0.000183 MiB 
00:05:38.295 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:38.295 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:38.296 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:38.296 element at 
address: 0x200027e6d440 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f900 
with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:38.296 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:38.296 list of memzone associated elements. size: 602.262573 MiB 00:05:38.296 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:38.296 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:38.296 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:38.296 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:38.296 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:38.296 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_69700_0 00:05:38.296 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:38.296 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69700_0 00:05:38.296 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:38.296 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69700_0 00:05:38.296 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:38.296 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:38.296 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:38.296 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:38.296 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:38.296 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69700 00:05:38.296 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:38.296 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69700 00:05:38.296 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:38.296 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69700 00:05:38.296 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:38.296 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:38.296 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:38.296 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:38.296 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:38.296 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:38.296 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:38.296 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:38.296 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:38.296 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69700 00:05:38.296 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:38.296 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69700 00:05:38.296 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:38.296 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69700 00:05:38.296 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:38.296 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69700 00:05:38.296 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:38.296 
associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69700 00:05:38.296 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:38.296 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:38.296 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:38.296 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:38.296 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:38.296 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:38.296 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:38.296 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69700 00:05:38.296 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:38.296 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:38.296 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:38.296 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:38.296 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:38.296 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69700 00:05:38.296 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:38.296 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:38.296 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:38.296 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69700 00:05:38.296 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:38.296 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69700 00:05:38.296 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:38.297 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:38.297 12:47:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:38.297 12:47:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69700 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 69700 ']' 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 69700 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69700 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.297 killing process with pid 69700 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69700' 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 69700 00:05:38.297 12:47:29 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 69700 00:05:38.556 00:05:38.556 real 0m1.586s 00:05:38.556 user 0m1.813s 00:05:38.556 sys 0m0.370s 00:05:38.556 12:47:30 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.556 ************************************ 00:05:38.556 END TEST dpdk_mem_utility 00:05:38.556 ************************************ 00:05:38.556 12:47:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.556 12:47:30 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:38.556 12:47:30 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.556 12:47:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.556 12:47:30 -- common/autotest_common.sh@10 -- # set +x 00:05:38.556 ************************************ 00:05:38.556 START TEST event 00:05:38.556 ************************************ 00:05:38.556 12:47:30 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:38.556 * Looking for test storage... 00:05:38.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:38.556 12:47:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:38.556 12:47:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:38.556 12:47:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.556 12:47:30 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:38.556 12:47:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.556 12:47:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.814 ************************************ 00:05:38.814 START TEST event_perf 00:05:38.814 ************************************ 00:05:38.814 12:47:30 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.814 Running I/O for 1 seconds...Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:38.814 [2024-08-11 12:47:30.186295] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:38.814 [2024-08-11 12:47:30.186494] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:05:38.814 [2024-08-11 12:47:30.344011] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.814 Running I/O for 1 seconds...[2024-08-11 12:47:30.379304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.814 [2024-08-11 12:47:30.379462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.814 [2024-08-11 12:47:30.379856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.814 [2024-08-11 12:47:30.379980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.189 00:05:40.189 lcore 0: 201385 00:05:40.189 lcore 1: 201385 00:05:40.189 lcore 2: 201385 00:05:40.189 lcore 3: 201386 00:05:40.189 done. 
00:05:40.189 00:05:40.189 real 0m1.297s 00:05:40.189 user 0m4.083s 00:05:40.189 sys 0m0.089s 00:05:40.189 12:47:31 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.189 12:47:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.189 ************************************ 00:05:40.189 END TEST event_perf 00:05:40.189 ************************************ 00:05:40.189 12:47:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.189 12:47:31 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:40.189 12:47:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.190 12:47:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.190 ************************************ 00:05:40.190 START TEST event_reactor 00:05:40.190 ************************************ 00:05:40.190 12:47:31 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.190 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:40.190 [2024-08-11 12:47:31.536069] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:40.190 [2024-08-11 12:47:31.536266] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69812 ] 00:05:40.190 [2024-08-11 12:47:31.682595] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.190 [2024-08-11 12:47:31.714777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.564 test_start 00:05:41.564 oneshot 00:05:41.564 tick 100 00:05:41.564 tick 100 00:05:41.564 tick 250 00:05:41.564 tick 100 00:05:41.564 tick 100 00:05:41.564 tick 100 00:05:41.564 tick 250 00:05:41.564 tick 500 00:05:41.564 tick 100 00:05:41.564 tick 100 00:05:41.564 tick 250 00:05:41.564 tick 100 00:05:41.564 tick 100 00:05:41.564 test_end 00:05:41.564 00:05:41.564 real 0m1.281s 00:05:41.564 user 0m1.105s 00:05:41.564 sys 0m0.067s 00:05:41.564 12:47:32 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.564 12:47:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:41.564 ************************************ 00:05:41.564 END TEST event_reactor 00:05:41.564 ************************************ 00:05:41.564 12:47:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.564 12:47:32 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:41.564 12:47:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.564 12:47:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.564 ************************************ 00:05:41.564 START TEST event_reactor_perf 00:05:41.564 ************************************ 00:05:41.564 12:47:32 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.564 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:41.564 [2024-08-11 12:47:32.870711] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:05:41.564 [2024-08-11 12:47:32.870906] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69843 ] 00:05:41.564 [2024-08-11 12:47:33.015321] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.564 [2024-08-11 12:47:33.048015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.945 test_start 00:05:42.945 test_end 00:05:42.945 Performance: 336074 events per second 00:05:42.945 00:05:42.945 real 0m1.279s 00:05:42.945 user 0m1.103s 00:05:42.945 sys 0m0.067s 00:05:42.945 12:47:34 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.945 ************************************ 00:05:42.945 END TEST event_reactor_perf 00:05:42.945 ************************************ 00:05:42.945 12:47:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.945 12:47:34 event -- event/event.sh@49 -- # uname -s 00:05:42.945 12:47:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:42.945 12:47:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:42.945 12:47:34 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.945 12:47:34 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.945 12:47:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.945 ************************************ 00:05:42.945 START TEST event_scheduler 00:05:42.945 ************************************ 00:05:42.945 12:47:34 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:42.945 * Looking for test storage... 00:05:42.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:42.945 12:47:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:42.945 12:47:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69904 00:05:42.945 12:47:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.945 12:47:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:42.945 12:47:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69904 00:05:42.945 12:47:34 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 69904 ']' 00:05:42.945 12:47:34 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.945 12:47:34 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.945 12:47:34 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.945 12:47:34 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.945 12:47:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.945 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:42.945 [2024-08-11 12:47:34.348420] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:42.945 [2024-08-11 12:47:34.348638] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69904 ] 00:05:42.945 [2024-08-11 12:47:34.515647] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.203 [2024-08-11 12:47:34.558858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.203 [2024-08-11 12:47:34.559019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.203 [2024-08-11 12:47:34.559465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.203 [2024-08-11 12:47:34.559518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.769 12:47:35 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.769 12:47:35 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:43.769 12:47:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:43.769 12:47:35 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:43.769 12:47:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.769 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.769 POWER: Cannot set governor of lcore 0 to userspace 00:05:43.769 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.769 POWER: Cannot set governor of lcore 0 to performance 00:05:43.769 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.769 POWER: Cannot set governor of lcore 0 to userspace 00:05:43.770 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:43.770 POWER: Unable to set Power Management Environment for lcore 0 00:05:43.770 [2024-08-11 12:47:35.365338] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:43.770 [2024-08-11 12:47:35.365375] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:43.770 [2024-08-11 12:47:35.365401] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:43.770 [2024-08-11 12:47:35.365439] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:43.770 [2024-08-11 12:47:35.365454] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:43.770 [2024-08-11 12:47:35.365466] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:44.029 12:47:35 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:44.029 12:47:35 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 [2024-08-11 12:47:35.423224] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:44.029 12:47:35 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:44.029 12:47:35 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.029 12:47:35 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 ************************************ 00:05:44.029 START TEST scheduler_create_thread 00:05:44.029 ************************************ 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 2 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 3 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 4 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 5 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 6 00:05:44.029 
12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 7 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 8 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 9 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 10 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:44.029 12:47:35 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:44.029 12:47:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.419 12:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:45.419 12:47:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:45.419 12:47:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:45.419 12:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:45.419 12:47:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.808 12:47:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:46.808 00:05:46.808 real 0m2.612s 00:05:46.808 user 0m0.023s 00:05:46.808 sys 0m0.003s 00:05:46.808 12:47:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.808 12:47:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.808 ************************************ 00:05:46.808 END TEST scheduler_create_thread 00:05:46.808 ************************************ 00:05:46.808 12:47:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.808 12:47:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69904 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 69904 ']' 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 69904 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69904 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:46.808 killing process with pid 69904 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69904' 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 69904 00:05:46.808 12:47:38 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 69904 00:05:47.067 [2024-08-11 12:47:38.527242] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
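The scheduler_create_thread block above is driven entirely over the RPC socket of the scheduler test app. As a rough sketch only (not the harness's exact rpc_cmd wrapper), the same sequence could be issued directly with scripts/rpc.py, assuming the app is listening on the default /var/tmp/spdk.sock and scheduler_plugin is importable as the test harness arranges:

    # threads pinned to single cores (masks 0x1..0x8), always active (-a 100)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # idle counterparts pinned the same way (-a 0)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned thread at 30% activity
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # the create call prints the new thread id; raise half_active from 0% to 50%
    tid=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # a short-lived thread that is created and then deleted
    tid=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"

This mirrors the thread_id=11 and thread_id=12 captures visible in the trace above (create half_active and set it 50% active, then create deleted and remove it).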
00:05:47.326 00:05:47.326 real 0m4.555s 00:05:47.326 user 0m8.819s 00:05:47.326 sys 0m0.375s 00:05:47.326 12:47:38 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.326 ************************************ 00:05:47.326 END TEST event_scheduler 00:05:47.326 ************************************ 00:05:47.326 12:47:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.326 12:47:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.326 12:47:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.326 12:47:38 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.326 12:47:38 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.326 12:47:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.326 ************************************ 00:05:47.326 START TEST app_repeat 00:05:47.326 ************************************ 00:05:47.326 12:47:38 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70006 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.326 Process app_repeat pid: 70006 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70006' 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.326 spdk_app_start Round 0 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70006 /var/tmp/spdk-nbd.sock 00:05:47.326 12:47:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 70006 ']' 00:05:47.326 12:47:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.326 12:47:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.326 12:47:38 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.326 12:47:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.326 12:47:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.326 12:47:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.326 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:47.326 [2024-08-11 12:47:38.829575] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:05:47.326 [2024-08-11 12:47:38.829727] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70006 ] 00:05:47.585 [2024-08-11 12:47:38.968958] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.585 [2024-08-11 12:47:39.002943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.585 [2024-08-11 12:47:39.003022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.585 12:47:39 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.585 12:47:39 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:47.585 12:47:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.844 Malloc0 00:05:47.844 12:47:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.103 Malloc1 00:05:48.103 12:47:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.103 12:47:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.362 /dev/nbd0 00:05:48.362 12:47:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.362 12:47:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:48.362 12:47:39 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.362 1+0 records in 00:05:48.362 1+0 records out 00:05:48.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398679 s, 10.3 MB/s 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:48.362 12:47:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:48.362 12:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.363 12:47:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.363 12:47:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.621 /dev/nbd1 00:05:48.880 12:47:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.880 12:47:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.880 1+0 records in 00:05:48.880 1+0 records out 00:05:48.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334876 s, 12.2 MB/s 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:48.880 12:47:40 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:48.880 12:47:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.880 12:47:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.880 12:47:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.880 12:47:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:48.880 12:47:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.139 { 00:05:49.139 "nbd_device": "/dev/nbd0", 00:05:49.139 "bdev_name": "Malloc0" 00:05:49.139 }, 00:05:49.139 { 00:05:49.139 "nbd_device": "/dev/nbd1", 00:05:49.139 "bdev_name": "Malloc1" 00:05:49.139 } 00:05:49.139 ]' 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.139 { 00:05:49.139 "nbd_device": "/dev/nbd0", 00:05:49.139 "bdev_name": "Malloc0" 00:05:49.139 }, 00:05:49.139 { 00:05:49.139 "nbd_device": "/dev/nbd1", 00:05:49.139 "bdev_name": "Malloc1" 00:05:49.139 } 00:05:49.139 ]' 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.139 /dev/nbd1' 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.139 /dev/nbd1' 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.139 12:47:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.140 256+0 records in 00:05:49.140 256+0 records out 00:05:49.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00869707 s, 121 MB/s 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.140 256+0 records in 00:05:49.140 256+0 records out 00:05:49.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256784 s, 40.8 MB/s 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.140 256+0 records in 00:05:49.140 256+0 records out 00:05:49.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346856 s, 30.2 MB/s 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.140 12:47:40 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.140 12:47:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.399 12:47:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.657 12:47:41 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.657 12:47:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.916 12:47:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.916 12:47:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.916 12:47:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.916 12:47:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.175 12:47:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.175 12:47:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.432 12:47:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.432 [2024-08-11 12:47:41.887379] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.432 [2024-08-11 12:47:41.918813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.432 [2024-08-11 12:47:41.918818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.432 [2024-08-11 12:47:41.947553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.432 [2024-08-11 12:47:41.947633] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.718 12:47:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.718 spdk_app_start Round 1 00:05:53.718 12:47:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.718 12:47:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70006 /var/tmp/spdk-nbd.sock 00:05:53.718 12:47:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 70006 ']' 00:05:53.718 12:47:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.718 12:47:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.718 12:47:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:53.718 12:47:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.718 12:47:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.718 12:47:45 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.718 12:47:45 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:53.718 12:47:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.718 Malloc0 00:05:53.977 12:47:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.977 Malloc1 00:05:53.977 12:47:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.977 12:47:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.237 /dev/nbd0 00:05:54.237 12:47:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.237 12:47:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.237 1+0 records in 00:05:54.237 1+0 records out 
00:05:54.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231847 s, 17.7 MB/s 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:54.237 12:47:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:54.237 12:47:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.237 12:47:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.237 12:47:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.805 /dev/nbd1 00:05:54.805 12:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.805 12:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.805 1+0 records in 00:05:54.805 1+0 records out 00:05:54.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303015 s, 13.5 MB/s 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:54.805 12:47:46 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:54.805 12:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.805 12:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.805 12:47:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.805 12:47:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.805 12:47:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.064 { 00:05:55.064 "nbd_device": "/dev/nbd0", 00:05:55.064 "bdev_name": "Malloc0" 00:05:55.064 }, 00:05:55.064 { 00:05:55.064 "nbd_device": "/dev/nbd1", 00:05:55.064 "bdev_name": "Malloc1" 00:05:55.064 } 
00:05:55.064 ]' 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.064 { 00:05:55.064 "nbd_device": "/dev/nbd0", 00:05:55.064 "bdev_name": "Malloc0" 00:05:55.064 }, 00:05:55.064 { 00:05:55.064 "nbd_device": "/dev/nbd1", 00:05:55.064 "bdev_name": "Malloc1" 00:05:55.064 } 00:05:55.064 ]' 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.064 /dev/nbd1' 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.064 /dev/nbd1' 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.064 256+0 records in 00:05:55.064 256+0 records out 00:05:55.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106891 s, 98.1 MB/s 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.064 256+0 records in 00:05:55.064 256+0 records out 00:05:55.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221998 s, 47.2 MB/s 00:05:55.064 12:47:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.065 256+0 records in 00:05:55.065 256+0 records out 00:05:55.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0416885 s, 25.2 MB/s 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.065 12:47:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.065 12:47:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.324 12:47:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.583 12:47:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.842 12:47:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.842 12:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.842 12:47:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.101 12:47:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.101 12:47:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.360 12:47:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.360 [2024-08-11 12:47:47.843544] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.360 [2024-08-11 12:47:47.875896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.360 [2024-08-11 12:47:47.875941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.360 [2024-08-11 12:47:47.904580] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.360 [2024-08-11 12:47:47.904658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.647 12:47:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.647 spdk_app_start Round 2 00:05:59.647 12:47:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.647 12:47:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70006 /var/tmp/spdk-nbd.sock 00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 70006 ']' 00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
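The waitfornbd helper expanded in the trace above amounts to two checks: the kernel must list the device in /proc/partitions, and a single 4 KiB O_DIRECT read from it must produce a non-empty file. A condensed sketch of that probe follows; the scratch path and the retry delay are illustrative (the real helper uses a file under test/event and, as the i <= 20 loop shows, up to 20 attempts):

    waitfornbd() {
        local nbd_name=$1 scratch=/tmp/nbdprobe i size
        # 1) wait until the kernel has registered the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # 2) prove the device is readable with one direct-I/O block
        dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ]
    }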
00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.647 12:47:50 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:59.647 12:47:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.905 Malloc0 00:05:59.905 12:47:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.905 Malloc1 00:06:00.164 12:47:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.164 /dev/nbd0 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.164 12:47:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.164 12:47:51 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:00.164 12:47:51 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:00.164 12:47:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:00.164 12:47:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:00.164 12:47:51 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.422 1+0 records in 00:06:00.422 1+0 records out 
00:06:00.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300689 s, 13.6 MB/s 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:00.422 12:47:51 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:00.422 12:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.422 12:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.422 12:47:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.681 /dev/nbd1 00:06:00.681 12:47:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.681 12:47:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.681 1+0 records in 00:06:00.681 1+0 records out 00:06:00.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374514 s, 10.9 MB/s 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:00.681 12:47:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:00.681 12:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.681 12:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.681 12:47:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.681 12:47:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.681 12:47:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.941 { 00:06:00.941 "nbd_device": "/dev/nbd0", 00:06:00.941 "bdev_name": "Malloc0" 00:06:00.941 }, 00:06:00.941 { 00:06:00.941 "nbd_device": "/dev/nbd1", 00:06:00.941 "bdev_name": "Malloc1" 00:06:00.941 } 
00:06:00.941 ]' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.941 { 00:06:00.941 "nbd_device": "/dev/nbd0", 00:06:00.941 "bdev_name": "Malloc0" 00:06:00.941 }, 00:06:00.941 { 00:06:00.941 "nbd_device": "/dev/nbd1", 00:06:00.941 "bdev_name": "Malloc1" 00:06:00.941 } 00:06:00.941 ]' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.941 /dev/nbd1' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.941 /dev/nbd1' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.941 256+0 records in 00:06:00.941 256+0 records out 00:06:00.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0096986 s, 108 MB/s 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.941 256+0 records in 00:06:00.941 256+0 records out 00:06:00.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019353 s, 54.2 MB/s 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.941 256+0 records in 00:06:00.941 256+0 records out 00:06:00.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332411 s, 31.5 MB/s 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.941 12:47:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.200 12:47:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.200 12:47:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.200 12:47:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.200 12:47:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.200 12:47:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.200 12:47:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.458 12:47:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.458 12:47:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.458 12:47:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.458 12:47:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.458 12:47:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.025 12:47:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.025 12:47:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.284 12:47:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.284 [2024-08-11 12:47:53.758328] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.284 [2024-08-11 12:47:53.792970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.284 [2024-08-11 12:47:53.792980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.284 [2024-08-11 12:47:53.824786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.284 [2024-08-11 12:47:53.824869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.572 12:47:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70006 /var/tmp/spdk-nbd.sock 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 70006 ']' 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
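Both app_repeat rounds drive the same write-then-verify pattern through bdev/nbd_common.sh: 1 MiB of random data is pushed to each NBD device with O_DIRECT and then compared back byte-for-byte against the source file. Reduced to a standalone sketch (temp path chosen here for illustration, device list as in the trace):

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # generate 1 MiB of random data (256 x 4 KiB blocks)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256

    # write the same data through every NBD device, bypassing the page cache
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # read it back and compare byte-for-byte against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev" || echo "verify failed on $dev"
    done

    rm "$tmp"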
00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:05.572 12:47:56 event.app_repeat -- event/event.sh@39 -- # killprocess 70006 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 70006 ']' 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 70006 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70006 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.572 killing process with pid 70006 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70006' 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@965 -- # kill 70006 00:06:05.572 12:47:56 event.app_repeat -- common/autotest_common.sh@970 -- # wait 70006 00:06:05.572 spdk_app_start is called in Round 0. 00:06:05.572 Shutdown signal received, stop current app iteration 00:06:05.572 Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 reinitialization... 00:06:05.572 spdk_app_start is called in Round 1. 00:06:05.572 Shutdown signal received, stop current app iteration 00:06:05.572 Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 reinitialization... 00:06:05.572 spdk_app_start is called in Round 2. 00:06:05.572 Shutdown signal received, stop current app iteration 00:06:05.572 Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 reinitialization... 00:06:05.572 spdk_app_start is called in Round 3. 00:06:05.572 Shutdown signal received, stop current app iteration 00:06:05.572 12:47:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:05.572 12:47:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:05.572 00:06:05.572 real 0m18.323s 00:06:05.572 user 0m41.744s 00:06:05.572 sys 0m2.581s 00:06:05.572 12:47:57 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.572 ************************************ 00:06:05.572 END TEST app_repeat 00:06:05.572 ************************************ 00:06:05.572 12:47:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.572 12:47:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:05.572 12:47:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:05.572 12:47:57 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.572 12:47:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.572 12:47:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.572 ************************************ 00:06:05.572 START TEST cpu_locks 00:06:05.572 ************************************ 00:06:05.572 12:47:57 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:05.832 * Looking for test storage... 
00:06:05.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:05.832 12:47:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:05.832 12:47:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:05.832 12:47:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:05.832 12:47:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:05.832 12:47:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.832 12:47:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.832 12:47:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.832 ************************************ 00:06:05.832 START TEST default_locks 00:06:05.832 ************************************ 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70432 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70432 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 70432 ']' 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.832 12:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.832 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:05.832 [2024-08-11 12:47:57.341605] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:05.832 [2024-08-11 12:47:57.341743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70432 ] 00:06:06.091 [2024-08-11 12:47:57.477058] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.091 [2024-08-11 12:47:57.513447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70432 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70432 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70432 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 70432 ']' 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 70432 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70432 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:07.028 killing process with pid 70432 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70432' 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 70432 00:06:07.028 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 70432 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70432 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@646 -- # local es=0 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 70432 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # waitforlisten 70432 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 70432 ']' 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.596 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.596 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (70432) - No such process 00:06:07.596 ERROR: process (pid: 70432) is no longer running 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.596 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # es=1 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.597 00:06:07.597 real 0m1.669s 00:06:07.597 user 0m1.873s 00:06:07.597 sys 0m0.431s 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.597 12:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.597 ************************************ 00:06:07.597 END TEST default_locks 00:06:07.597 ************************************ 00:06:07.597 12:47:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.597 12:47:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.597 12:47:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.597 12:47:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.597 ************************************ 00:06:07.597 START TEST default_locks_via_rpc 00:06:07.597 ************************************ 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70474 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70474 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70474 ']' 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
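The default_locks assertion is that a target started with -m 0x1 holds a file lock whose name contains spdk_cpu_lock, and that nothing of the kind survives once the target is killed (which is also why the waitforlisten on the stale pid above returns 1). A reduced sketch of that check, reusing the pid variable name from the trace:

    locks_exist() {
        # succeeds only if the given pid holds an SPDK CPU core lock file
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # expected to succeed while the target is alive
    locks_exist "$spdk_tgt_pid" && echo "core lock held by $spdk_tgt_pid"

    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid" 2>/dev/null

    # expected to fail once the process is gone
    locks_exist "$spdk_tgt_pid" || echo "no core locks left behind"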
00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.597 12:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.597 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:07.597 [2024-08-11 12:47:59.087897] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:07.597 [2024-08-11 12:47:59.088083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70474 ] 00:06:07.856 [2024-08-11 12:47:59.236233] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.856 [2024-08-11 12:47:59.271791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70474 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70474 00:06:08.792 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70474 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 70474 ']' 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 70474 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@951 -- # uname 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70474 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.052 killing process with pid 70474 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70474' 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 70474 00:06:09.052 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 70474 00:06:09.311 00:06:09.311 real 0m1.860s 00:06:09.311 user 0m2.061s 00:06:09.311 sys 0m0.554s 00:06:09.311 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.311 12:48:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.311 ************************************ 00:06:09.311 END TEST default_locks_via_rpc 00:06:09.311 ************************************ 00:06:09.311 12:48:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.311 12:48:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.311 12:48:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.311 12:48:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.311 ************************************ 00:06:09.311 START TEST non_locking_app_on_locked_coremask 00:06:09.311 ************************************ 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70526 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70526 /var/tmp/spdk.sock 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 70526 ']' 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
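Every "killing process with pid NNN" line in this output comes from the same killprocess recipe: confirm the pid is still alive, resolve its command name so a sudo wrapper is never signalled directly, then kill and reap it. A condensed sketch of the Linux path exercised here (the uname check in the trace implies a non-Linux branch that this run never takes):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # process must still exist
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        [ "$process_name" = sudo ] && return 1           # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }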
00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.311 12:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.570 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:09.570 [2024-08-11 12:48:00.979416] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:09.570 [2024-08-11 12:48:00.979557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70526 ] 00:06:09.570 [2024-08-11 12:48:01.121469] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.570 [2024-08-11 12:48:01.156509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70540 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70540 /var/tmp/spdk2.sock 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 70540 ']' 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.829 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.830 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.830 12:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.830 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:09.830 [2024-08-11 12:48:01.421092] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:09.830 [2024-08-11 12:48:01.421281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70540 ] 00:06:10.091 [2024-08-11 12:48:01.576535] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.091 [2024-08-11 12:48:01.576597] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.092 [2024-08-11 12:48:01.652049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.039 12:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.039 12:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:11.039 12:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70526 00:06:11.039 12:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70526 00:06:11.039 12:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70526 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 70526 ']' 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 70526 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70526 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.974 killing process with pid 70526 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70526' 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 70526 00:06:11.974 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 70526 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70540 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 70540 ']' 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 70540 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70540 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.542 killing process with pid 70540 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70540' 00:06:12.542 12:48:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 70540 00:06:12.542 12:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 70540 00:06:12.801 00:06:12.801 real 0m3.274s 00:06:12.801 user 0m3.723s 00:06:12.801 sys 0m1.021s 00:06:12.801 12:48:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.801 12:48:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.801 ************************************ 00:06:12.801 END TEST non_locking_app_on_locked_coremask 00:06:12.801 ************************************ 00:06:12.801 12:48:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.801 12:48:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.801 12:48:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.801 12:48:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.801 ************************************ 00:06:12.801 START TEST locking_app_on_unlocked_coremask 00:06:12.801 ************************************ 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70604 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70604 /var/tmp/spdk.sock 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 70604 ']' 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.801 12:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.801 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:12.801 [2024-08-11 12:48:04.319839] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:12.802 [2024-08-11 12:48:04.320049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70604 ] 00:06:13.060 [2024-08-11 12:48:04.465882] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
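The two coremask tests around this point each pair two spdk_tgt instances on core 0: only one of them may take the CPU core lock, so whichever side is meant to share the core is started with --disable-cpumask-locks, and the second instance gets its own RPC socket to avoid a bind conflict. The launch pattern, stripped of the harness plumbing (backgrounding with & and the pid variables stand in for the waitforlisten bookkeeping seen in the trace):

    # first target: takes the core 0 lock, serves RPC on the default /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!

    # second target: shares core 0, so it must skip the core lock,
    # and it needs a separate RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!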
00:06:13.060 [2024-08-11 12:48:04.465946] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.060 [2024-08-11 12:48:04.499116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70620 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70620 /var/tmp/spdk2.sock 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 70620 ']' 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.997 12:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.997 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:13.997 [2024-08-11 12:48:05.358830] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:13.997 [2024-08-11 12:48:05.358987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70620 ] 00:06:13.997 [2024-08-11 12:48:05.505542] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.997 [2024-08-11 12:48:05.572348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.934 12:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.934 12:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:14.934 12:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70620 00:06:14.934 12:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70620 00:06:14.934 12:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.502 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70604 00:06:15.502 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 70604 ']' 00:06:15.502 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 70604 00:06:15.502 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:15.502 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:15.502 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70604 00:06:15.760 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:15.761 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:15.761 killing process with pid 70604 00:06:15.761 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70604' 00:06:15.761 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 70604 00:06:15.761 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 70604 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70620 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 70620 ']' 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 70620 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70620 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:16.329 killing process with pid 70620 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70620' 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 70620 00:06:16.329 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 70620 00:06:16.588 00:06:16.588 real 0m3.771s 00:06:16.588 user 0m4.327s 00:06:16.588 sys 0m1.028s 00:06:16.588 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.588 12:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.588 ************************************ 00:06:16.588 END TEST locking_app_on_unlocked_coremask 00:06:16.589 ************************************ 00:06:16.589 12:48:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.589 12:48:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.589 12:48:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.589 12:48:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.589 ************************************ 00:06:16.589 START TEST locking_app_on_locked_coremask 00:06:16.589 ************************************ 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70683 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70683 /var/tmp/spdk.sock 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 70683 ']' 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.589 12:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.589 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:16.589 [2024-08-11 12:48:08.124150] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
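The locks_exist helper used throughout these cpu_locks tests reduces to the two commands visible earlier in this run: lslocks on the target's pid, filtered for the spdk_cpu_lock files under /var/tmp. A minimal stand-alone sketch of that check follows; the spdk_tgt path and the -m 0x1 mask mirror this run and are assumptions about the local build, and the sleep is a crude stand-in for the waitforlisten helper the scripts actually use.

  # Start one SPDK target on core 0 and confirm it holds its per-core lock file.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # assumed build path, as printed in the log
  "$SPDK_TGT" -m 0x1 &
  pid=$!
  sleep 2                                                    # crude wait; the tests use waitforlisten instead
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds a /var/tmp/spdk_cpu_lock_* lock"
  fi
  kill "$pid"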
00:06:16.589 [2024-08-11 12:48:08.124313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70683 ] 00:06:16.848 [2024-08-11 12:48:08.260528] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.848 [2024-08-11 12:48:08.296967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70699 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70699 /var/tmp/spdk2.sock 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 70699 /var/tmp/spdk2.sock 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # waitforlisten 70699 /var/tmp/spdk2.sock 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 70699 ']' 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.784 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.784 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:17.784 [2024-08-11 12:48:09.196939] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:17.784 [2024-08-11 12:48:09.197158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:06:17.784 [2024-08-11 12:48:09.357635] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70683 has claimed it. 00:06:17.784 [2024-08-11 12:48:09.357731] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.352 ERROR: process (pid: 70699) is no longer running 00:06:18.352 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (70699) - No such process 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70683 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70683 00:06:18.352 12:48:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70683 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 70683 ']' 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 70683 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70683 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.612 killing process with pid 70683 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70683' 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 70683 00:06:18.612 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 70683 00:06:18.871 00:06:18.871 real 0m2.404s 00:06:18.871 user 0m2.842s 00:06:18.871 sys 0m0.585s 00:06:18.871 12:48:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.871 12:48:10 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:18.871 ************************************ 00:06:18.871 END TEST locking_app_on_locked_coremask 00:06:18.871 ************************************ 00:06:19.130 12:48:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.130 12:48:10 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.130 12:48:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.130 12:48:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.130 ************************************ 00:06:19.130 START TEST locking_overlapped_coremask 00:06:19.130 ************************************ 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70747 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70747 /var/tmp/spdk.sock 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 70747 ']' 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.130 12:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.130 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:19.130 [2024-08-11 12:48:10.607412] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
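The locking_app_on_locked_coremask case that just ended relies on the failure path printed above: a second spdk_tgt asked to use an already-claimed core aborts with "Cannot create lock on core 0, probably process <pid> has claimed it". A hedged sketch of that conflict, with the binary path, mask, and second RPC socket taken from this run as assumptions:

  # Two targets on the same core mask: the second should refuse to start.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # assumed build path
  "$SPDK_TGT" -m 0x1 &                                        # first target claims core 0
  first=$!
  sleep 2
  if ! "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock; then        # same mask, separate RPC socket
      echo "second target aborted: core 0 is already locked by pid $first"
  fi
  kill "$first"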
00:06:19.130 [2024-08-11 12:48:10.607611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70747 ] 00:06:19.389 [2024-08-11 12:48:10.756507] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.389 [2024-08-11 12:48:10.792449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.389 [2024-08-11 12:48:10.792531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.389 [2024-08-11 12:48:10.792614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70765 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70765 /var/tmp/spdk2.sock 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 70765 /var/tmp/spdk2.sock 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # waitforlisten 70765 /var/tmp/spdk2.sock 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 70765 ']' 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.327 12:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.327 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:20.327 [2024-08-11 12:48:11.733776] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:20.327 [2024-08-11 12:48:11.734093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70765 ] 00:06:20.327 [2024-08-11 12:48:11.890492] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70747 has claimed it. 00:06:20.327 [2024-08-11 12:48:11.893932] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.893 ERROR: process (pid: 70765) is no longer running 00:06:20.894 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (70765) - No such process 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70747 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 70747 ']' 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 70747 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70747 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.894 killing process with pid 70747 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70747' 00:06:20.894 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 70747 00:06:20.894 12:48:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 70747 00:06:21.153 00:06:21.153 real 0m2.244s 00:06:21.153 user 0m6.415s 00:06:21.153 sys 0m0.463s 00:06:21.153 ************************************ 00:06:21.153 END TEST locking_overlapped_coremask 00:06:21.153 ************************************ 00:06:21.153 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.153 12:48:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.412 12:48:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.412 12:48:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.412 12:48:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.412 12:48:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.412 ************************************ 00:06:21.412 START TEST locking_overlapped_coremask_via_rpc 00:06:21.412 ************************************ 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70812 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70812 /var/tmp/spdk.sock 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70812 ']' 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.412 12:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.412 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:21.412 [2024-08-11 12:48:12.879941] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:21.412 [2024-08-11 12:48:12.880591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70812 ] 00:06:21.671 [2024-08-11 12:48:13.020933] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
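check_remaining_locks, which closed out the overlapped-coremask test above, compares the lock files actually present under /var/tmp with the ones a 0x7 mask (cores 0 through 2) should have created. The comparison is the same glob-versus-brace-expansion test that appears in the log, reproduced here as a small sketch:

  # Expect exactly one lock file per claimed core: spdk_cpu_lock_000 .. spdk_cpu_lock_002 for mask 0x7.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
      echo "no stray core locks left behind"
  fi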
00:06:21.671 [2024-08-11 12:48:13.021041] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.671 [2024-08-11 12:48:13.060103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.671 [2024-08-11 12:48:13.060213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.671 [2024-08-11 12:48:13.060434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70817 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70817 /var/tmp/spdk2.sock 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70817 ']' 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.671 12:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.930 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:21.930 [2024-08-11 12:48:13.345454] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:21.930 [2024-08-11 12:48:13.345648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:06:21.930 [2024-08-11 12:48:13.503561] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.930 [2024-08-11 12:48:13.503644] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.187 [2024-08-11 12:48:13.583128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.187 [2024-08-11 12:48:13.587089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.188 [2024-08-11 12:48:13.587170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@646 -- # local es=0 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:22.753 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.753 [2024-08-11 12:48:14.337259] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70812 has claimed it. 00:06:22.753 request: 00:06:22.753 { 00:06:22.753 "method": "framework_enable_cpumask_locks", 00:06:22.753 "req_id": 1 00:06:22.753 } 00:06:22.753 Got JSON-RPC error response 00:06:22.753 response: 00:06:22.753 { 00:06:22.753 "code": -32603, 00:06:22.754 "message": "Failed to claim CPU core: 2" 00:06:22.754 } 00:06:22.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
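The request/response pair above is an ordinary JSON-RPC exchange and can be driven directly with scripts/rpc.py against the two sockets used in this test. A sketch under the assumption that both targets from this run are still up, started with --disable-cpumask-locks, and overlapping on core 2:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py             # assumed repo path, as in the log
  # First target (default socket /var/tmp/spdk.sock) claims its cores at runtime.
  "$RPC" framework_enable_cpumask_locks
  # The second target shares core 2 with the first, so its claim is rejected with
  # code -32603 ("Failed to claim CPU core: 2"), exactly as logged above.
  "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "claim refused, as expected"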
00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # es=1 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70812 /var/tmp/spdk.sock 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70812 ']' 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.754 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70817 /var/tmp/spdk2.sock 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 70817 ']' 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.319 ************************************ 00:06:23.319 END TEST locking_overlapped_coremask_via_rpc 00:06:23.319 ************************************ 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.319 00:06:23.319 real 0m2.113s 00:06:23.319 user 0m1.266s 00:06:23.319 sys 0m0.174s 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.319 12:48:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.577 12:48:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.577 12:48:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70812 ]] 00:06:23.577 12:48:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70812 00:06:23.577 12:48:14 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70812 ']' 00:06:23.577 12:48:14 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70812 00:06:23.577 12:48:14 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:23.577 12:48:14 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.577 12:48:14 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70812 00:06:23.577 killing process with pid 70812 00:06:23.577 12:48:14 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.578 12:48:14 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.578 12:48:14 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70812' 00:06:23.578 12:48:14 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 70812 00:06:23.578 12:48:14 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 70812 00:06:23.836 12:48:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70817 ]] 00:06:23.836 12:48:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70817 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70817 ']' 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70817 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.836 
12:48:15 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70817 00:06:23.836 killing process with pid 70817 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70817' 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 70817 00:06:23.836 12:48:15 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 70817 00:06:24.094 12:48:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.094 12:48:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.094 12:48:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70812 ]] 00:06:24.094 12:48:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70812 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70812 ']' 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70812 00:06:24.094 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (70812) - No such process 00:06:24.094 Process with pid 70812 is not found 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 70812 is not found' 00:06:24.094 12:48:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70817 ]] 00:06:24.094 12:48:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70817 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 70817 ']' 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 70817 00:06:24.094 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (70817) - No such process 00:06:24.094 Process with pid 70817 is not found 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 70817 is not found' 00:06:24.094 12:48:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.094 00:06:24.094 real 0m18.463s 00:06:24.094 user 0m33.052s 00:06:24.094 sys 0m5.069s 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.094 12:48:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 ************************************ 00:06:24.094 END TEST cpu_locks 00:06:24.094 ************************************ 00:06:24.094 00:06:24.094 real 0m45.602s 00:06:24.094 user 1m30.025s 00:06:24.094 sys 0m8.498s 00:06:24.094 12:48:15 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.094 12:48:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.094 ************************************ 00:06:24.094 END TEST event 00:06:24.094 ************************************ 00:06:24.355 12:48:15 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.355 12:48:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.355 12:48:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.355 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:24.355 ************************************ 00:06:24.355 START TEST thread 00:06:24.355 ************************************ 00:06:24.355 12:48:15 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.355 * Looking for test storage... 
00:06:24.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.355 12:48:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.355 12:48:15 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:24.355 12:48:15 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.355 12:48:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.355 ************************************ 00:06:24.355 START TEST thread_poller_perf 00:06:24.355 ************************************ 00:06:24.355 12:48:15 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.355 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:24.355 [2024-08-11 12:48:15.871889] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:24.355 [2024-08-11 12:48:15.872139] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70944 ] 00:06:24.614 [2024-08-11 12:48:16.015374] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.614 [2024-08-11 12:48:16.051294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.614 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:25.621 ====================================== 00:06:25.621 busy:2210244441 (cyc) 00:06:25.621 total_run_count: 350000 00:06:25.621 tsc_hz: 2200000000 (cyc) 00:06:25.621 ====================================== 00:06:25.621 poller_cost: 6314 (cyc), 2870 (nsec) 00:06:25.621 00:06:25.621 real 0m1.304s 00:06:25.621 user 0m1.116s 00:06:25.621 sys 0m0.067s 00:06:25.621 12:48:17 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.621 12:48:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.621 ************************************ 00:06:25.621 END TEST thread_poller_perf 00:06:25.621 ************************************ 00:06:25.622 12:48:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.622 12:48:17 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:25.622 12:48:17 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.622 12:48:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.622 ************************************ 00:06:25.622 START TEST thread_poller_perf 00:06:25.622 ************************************ 00:06:25.622 12:48:17 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.622 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:25.880 [2024-08-11 12:48:17.223554] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:25.880 [2024-08-11 12:48:17.223773] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:06:25.880 [2024-08-11 12:48:17.367447] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.880 [2024-08-11 12:48:17.399827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.880 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:27.255 ====================================== 00:06:27.255 busy:2203661296 (cyc) 00:06:27.255 total_run_count: 4503000 00:06:27.255 tsc_hz: 2200000000 (cyc) 00:06:27.255 ====================================== 00:06:27.255 poller_cost: 489 (cyc), 222 (nsec) 00:06:27.255 00:06:27.255 real 0m1.293s 00:06:27.255 user 0m1.111s 00:06:27.255 sys 0m0.073s 00:06:27.255 12:48:18 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.255 12:48:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.255 ************************************ 00:06:27.255 END TEST thread_poller_perf 00:06:27.255 ************************************ 00:06:27.255 12:48:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.255 00:06:27.255 real 0m2.796s 00:06:27.255 user 0m2.300s 00:06:27.255 sys 0m0.262s 00:06:27.255 12:48:18 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.255 ************************************ 00:06:27.255 END TEST thread 00:06:27.255 12:48:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.255 ************************************ 00:06:27.255 12:48:18 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:27.255 12:48:18 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.255 12:48:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.255 12:48:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.255 12:48:18 -- common/autotest_common.sh@10 -- # set +x 00:06:27.255 ************************************ 00:06:27.255 START TEST app_cmdline 00:06:27.255 ************************************ 00:06:27.255 12:48:18 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.255 * Looking for test storage... 00:06:27.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.255 12:48:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.255 12:48:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71056 00:06:27.255 12:48:18 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.255 12:48:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71056 00:06:27.255 12:48:18 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 71056 ']' 00:06:27.255 12:48:18 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.255 12:48:18 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.255 12:48:18 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
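The two poller_perf summaries above reduce to simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure divides that by the TSC rate. A small sketch re-deriving the first run's numbers, with the values copied from the log:

  # Re-derive "poller_cost: 6314 (cyc), 2870 (nsec)" from the first run above.
  busy=2210244441        # busy TSC cycles reported by poller_perf
  runs=350000            # total_run_count
  tsc_hz=2200000000      # cycles per second
  cyc=$(( busy / runs ))                  # 6314 cycles per poller invocation
  nsec=$(( cyc * 1000000000 / tsc_hz ))   # about 2870 ns at 2.2 GHz
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic on the second run (2203661296 cycles over 4503000 runs) gives the 489-cycle, 222 ns figure reported above.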
00:06:27.255 12:48:18 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.255 12:48:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.255 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:27.255 [2024-08-11 12:48:18.770255] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:27.255 [2024-08-11 12:48:18.770494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71056 ] 00:06:27.514 [2024-08-11 12:48:18.917164] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.514 [2024-08-11 12:48:18.951319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.451 12:48:19 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.451 12:48:19 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:28.451 12:48:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.451 { 00:06:28.451 "version": "SPDK v24.09-pre git sha1 227b8322c", 00:06:28.451 "fields": { 00:06:28.451 "major": 24, 00:06:28.451 "minor": 9, 00:06:28.451 "patch": 0, 00:06:28.451 "suffix": "-pre", 00:06:28.451 "commit": "227b8322c" 00:06:28.451 } 00:06:28.451 } 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.710 12:48:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@646 -- # local es=0 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.710 12:48:20 app_cmdline -- 
common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.710 12:48:20 app_cmdline -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.970 request: 00:06:28.970 { 00:06:28.970 "method": "env_dpdk_get_mem_stats", 00:06:28.970 "req_id": 1 00:06:28.970 } 00:06:28.970 Got JSON-RPC error response 00:06:28.970 response: 00:06:28.970 { 00:06:28.970 "code": -32601, 00:06:28.970 "message": "Method not found" 00:06:28.970 } 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@649 -- # es=1 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:28.970 12:48:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71056 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 71056 ']' 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 71056 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71056 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.970 killing process with pid 71056 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71056' 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@965 -- # kill 71056 00:06:28.970 12:48:20 app_cmdline -- common/autotest_common.sh@970 -- # wait 71056 00:06:29.229 00:06:29.229 real 0m2.178s 00:06:29.229 user 0m2.857s 00:06:29.229 sys 0m0.472s 00:06:29.229 12:48:20 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.229 ************************************ 00:06:29.229 END TEST app_cmdline 00:06:29.229 ************************************ 00:06:29.229 12:48:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.229 12:48:20 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.229 12:48:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.229 12:48:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.229 12:48:20 -- common/autotest_common.sh@10 -- # set +x 00:06:29.229 ************************************ 00:06:29.229 START TEST version 00:06:29.229 ************************************ 00:06:29.229 12:48:20 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.488 * Looking for test storage... 
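The app_cmdline test that ended above started spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; anything else is answered with the -32601 "Method not found" response shown in the log. A sketch of the same exchange, with the rpc.py path assumed from this run and the target still listening on the default socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" spdk_get_version          # allowed: returns the version JSON seen above
  "$RPC" rpc_get_methods           # allowed: lists exactly the two permitted methods
  "$RPC" env_dpdk_get_mem_stats \
      || echo "rejected with -32601 Method not found, as expected"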
00:06:29.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.488 12:48:20 version -- app/version.sh@17 -- # get_header_version major 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # cut -f2 00:06:29.488 12:48:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.488 12:48:20 version -- app/version.sh@17 -- # major=24 00:06:29.488 12:48:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:29.488 12:48:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # cut -f2 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.488 12:48:20 version -- app/version.sh@18 -- # minor=9 00:06:29.488 12:48:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # cut -f2 00:06:29.488 12:48:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.488 12:48:20 version -- app/version.sh@19 -- # patch=0 00:06:29.488 12:48:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # cut -f2 00:06:29.488 12:48:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.488 12:48:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.488 12:48:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:29.488 12:48:20 version -- app/version.sh@22 -- # version=24.9 00:06:29.488 12:48:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:29.488 12:48:20 version -- app/version.sh@28 -- # version=24.9rc0 00:06:29.488 12:48:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:29.488 12:48:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:29.488 12:48:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:29.488 12:48:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:29.488 00:06:29.488 real 0m0.150s 00:06:29.488 user 0m0.085s 00:06:29.488 sys 0m0.092s 00:06:29.488 12:48:20 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.488 12:48:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:29.488 ************************************ 00:06:29.488 END TEST version 00:06:29.488 ************************************ 00:06:29.488 12:48:21 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:29.488 12:48:21 -- spdk/autotest.sh@201 -- # [[ 0 -eq 1 ]] 00:06:29.488 12:48:21 -- spdk/autotest.sh@207 -- # uname -s 00:06:29.488 12:48:21 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:06:29.488 12:48:21 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:06:29.488 12:48:21 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:06:29.488 12:48:21 -- spdk/autotest.sh@220 -- # '[' 1 -eq 1 ']' 00:06:29.488 12:48:21 -- spdk/autotest.sh@221 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:29.488 
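The version test above pulls each component straight out of include/spdk/version.h with grep, cut and tr; the same extraction can be run by hand. The header path is an assumption taken from this run:

  # Reproduce get_header_version from test/app/version.sh for the MAJOR field.
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h     # assumed repo path, as in the log
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'
  # Repeating this for MINOR, PATCH and SUFFIX builds the "24.9rc0" string that the
  # test compares against python3 -c 'import spdk; print(spdk.__version__)'.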
12:48:21 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:29.488 12:48:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.488 12:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.488 ************************************ 00:06:29.488 START TEST blockdev_nvme 00:06:29.488 ************************************ 00:06:29.488 12:48:21 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:29.748 * Looking for test storage... 00:06:29.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:29.748 12:48:21 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71206 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71206 00:06:29.748 12:48:21 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 71206 ']' 00:06:29.748 12:48:21 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.748 12:48:21 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
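Referring back to the version.sh trace above, a condensed sketch of the same parsing; the header path and the grep/cut/tr pipeline are copied from the trace, while the -pre to rc0 mapping is an assumption about version.sh's internals:

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch      # skipped in this run: patch is 0
    [[ $suffix == -pre ]] && version=${version}rc0   # assumption: how -pre becomes 24.9rc0
    # the test then compares this against the installed python package:
    #   python3 -c 'import spdk; print(spdk.__version__)'   -> 24.9rc0
    echo "$version"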
00:06:29.748 12:48:21 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.748 12:48:21 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.748 12:48:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.748 12:48:21 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:29.748 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:29.748 [2024-08-11 12:48:21.244640] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:29.748 [2024-08-11 12:48:21.244849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71206 ] 00:06:30.007 [2024-08-11 12:48:21.392074] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.007 [2024-08-11 12:48:21.429778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.945 12:48:22 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.945 12:48:22 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:06:30.945 12:48:22 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:30.945 12:48:22 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:30.945 12:48:22 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:30.945 12:48:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:30.945 12:48:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:30.945 12:48:22 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:30.945 12:48:22 blockdev_nvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:30.945 12:48:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:31.205 
12:48:22 blockdev_nvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.205 12:48:22 blockdev_nvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:31.205 12:48:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:31.206 12:48:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "41b6aaa4-6ec6-49af-99f4-e2c47259e671"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "41b6aaa4-6ec6-49af-99f4-e2c47259e671",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "4d832b06-d808-4f95-9fa8-a0f709c3c873"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4d832b06-d808-4f95-9fa8-a0f709c3c873",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8a7947a9-5d53-41e7-af56-0e1fc788a8a7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8a7947a9-5d53-41e7-af56-0e1fc788a8a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cef009ea-c005-4a29-8706-add1d17d040b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cef009ea-c005-4a29-8706-add1d17d040b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": 
"12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b8152e34-2315-432e-9434-586c809b7f49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8152e34-2315-432e-9434-586c809b7f49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "255ee2f7-e444-46f8-8f71-4d5ae4a281c6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "255ee2f7-e444-46f8-8f71-4d5ae4a281c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:31.465 12:48:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:31.465 12:48:22 blockdev_nvme -- 
bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:31.465 12:48:22 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:31.465 12:48:22 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71206 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 71206 ']' 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 71206 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71206 00:06:31.465 killing process with pid 71206 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71206' 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 71206 00:06:31.465 12:48:22 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 71206 00:06:31.724 12:48:23 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:31.724 12:48:23 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:31.724 12:48:23 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:31.724 12:48:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.724 12:48:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.724 ************************************ 00:06:31.724 START TEST bdev_hello_world 00:06:31.724 ************************************ 00:06:31.724 12:48:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:31.724 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:31.724 [2024-08-11 12:48:23.255914] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
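The hello_bdev invocation above (its output continues below) can also be repeated by hand; the command line, config path and expected messages are taken verbatim from this run, and running it standalone is otherwise an assumption:

    # same example binary, JSON config and bdev name as in the trace above
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1
    # on success the app opens Nvme0n1, writes a buffer, reads it back and
    # logs "Read string from bdev : Hello World!" before stopping (see below)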
00:06:31.724 [2024-08-11 12:48:23.256104] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:06:31.983 [2024-08-11 12:48:23.400607] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.983 [2024-08-11 12:48:23.437541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.242 [2024-08-11 12:48:23.806400] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:32.242 [2024-08-11 12:48:23.806462] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:32.242 [2024-08-11 12:48:23.806505] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:32.242 [2024-08-11 12:48:23.809184] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:32.242 [2024-08-11 12:48:23.809629] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:32.242 [2024-08-11 12:48:23.809698] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:32.242 [2024-08-11 12:48:23.809960] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:32.242 00:06:32.242 [2024-08-11 12:48:23.810001] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:32.501 00:06:32.501 real 0m0.842s 00:06:32.501 user 0m0.546s 00:06:32.501 sys 0m0.189s 00:06:32.501 12:48:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.501 12:48:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:32.501 ************************************ 00:06:32.501 END TEST bdev_hello_world 00:06:32.501 ************************************ 00:06:32.501 12:48:24 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:32.501 12:48:24 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:32.501 12:48:24 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.501 12:48:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.501 ************************************ 00:06:32.501 START TEST bdev_bounds 00:06:32.501 ************************************ 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71305 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.501 Process bdevio pid: 71305 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71305' 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71305 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 71305 ']' 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.501 12:48:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
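For the bdev_bounds test starting here, both command lines are visible in the surrounding trace: bdevio is launched with -w (apparently so that it waits for the perform_tests RPC; an inference from the trace ordering) and tests.py then kicks off the suites. A hand-run sketch, with the background/foreground split and the manual cleanup as assumptions:

    # start the bdevio app in wait mode against the same bdev config
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # ...once it listens on /var/tmp/spdk.sock, trigger the test run
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    # the per-bdev suites and the run summary below come from this call
    kill "$bdevio_pid"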
00:06:32.502 12:48:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.502 12:48:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.502 12:48:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:32.760 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:32.760 [2024-08-11 12:48:24.158006] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:32.760 [2024-08-11 12:48:24.158204] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71305 ] 00:06:32.760 [2024-08-11 12:48:24.306263] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.760 [2024-08-11 12:48:24.343112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.760 [2024-08-11 12:48:24.343174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.760 [2024-08-11 12:48:24.343252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.696 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.696 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:06:33.696 12:48:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:33.696 I/O targets: 00:06:33.696 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:33.696 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:33.696 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:33.696 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:33.696 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:33.696 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:33.696 00:06:33.696 00:06:33.696 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.696 http://cunit.sourceforge.net/ 00:06:33.696 00:06:33.696 00:06:33.696 Suite: bdevio tests on: Nvme3n1 00:06:33.696 Test: blockdev write read block ...passed 00:06:33.696 Test: blockdev write zeroes read block ...passed 00:06:33.696 Test: blockdev write zeroes read no split ...passed 00:06:33.696 Test: blockdev write zeroes read split ...passed 00:06:33.696 Test: blockdev write zeroes read split partial ...passed 00:06:33.696 Test: blockdev reset ...[2024-08-11 12:48:25.211891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:33.696 passed 00:06:33.696 Test: blockdev write read 8 blocks ...[2024-08-11 12:48:25.214385] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.696 passed 00:06:33.696 Test: blockdev write read size > 128k ...passed 00:06:33.696 Test: blockdev write read invalid size ...passed 00:06:33.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.696 Test: blockdev write read max offset ...passed 00:06:33.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.696 Test: blockdev writev readv 8 blocks ...passed 00:06:33.696 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.696 Test: blockdev writev readv block ...passed 00:06:33.696 Test: blockdev writev readv size > 128k ...passed 00:06:33.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.696 Test: blockdev comparev and writev ...[2024-08-11 12:48:25.221126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b240e000 len:0x1000 00:06:33.696 [2024-08-11 12:48:25.221207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.696 passed 00:06:33.696 Test: blockdev nvme passthru rw ...passed 00:06:33.696 Test: blockdev nvme passthru vendor specific ...[2024-08-11 12:48:25.222132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:33.696 Test: blockdev nvme admin passthru ...passed 00:06:33.696 Test: blockdev copy ...RP2 0x0 00:06:33.696 [2024-08-11 12:48:25.222324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.696 passed 00:06:33.696 Suite: bdevio tests on: Nvme2n3 00:06:33.696 Test: blockdev write read block ...passed 00:06:33.696 Test: blockdev write zeroes read block ...passed 00:06:33.696 Test: blockdev write zeroes read no split ...passed 00:06:33.696 Test: blockdev write zeroes read split ...passed 00:06:33.696 Test: blockdev write zeroes read split partial ...passed 00:06:33.696 Test: blockdev reset ...[2024-08-11 12:48:25.236054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:33.696 passed 00:06:33.696 Test: blockdev write read 8 blocks ...[2024-08-11 12:48:25.238791] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.696 passed 00:06:33.696 Test: blockdev write read size > 128k ...passed 00:06:33.696 Test: blockdev write read invalid size ...passed 00:06:33.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.696 Test: blockdev write read max offset ...passed 00:06:33.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.696 Test: blockdev writev readv 8 blocks ...passed 00:06:33.696 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.696 Test: blockdev writev readv block ...passed 00:06:33.696 Test: blockdev writev readv size > 128k ...passed 00:06:33.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.696 Test: blockdev comparev and writev ...[2024-08-11 12:48:25.244913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:33.696 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2bac09000 len:0x1000 00:06:33.696 [2024-08-11 12:48:25.245114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.696 passed 00:06:33.696 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.696 Test: blockdev nvme admin passthru ...[2024-08-11 12:48:25.245880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.696 [2024-08-11 12:48:25.245940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.696 passed 00:06:33.696 Test: blockdev copy ...passed 00:06:33.696 Suite: bdevio tests on: Nvme2n2 00:06:33.696 Test: blockdev write read block ...passed 00:06:33.696 Test: blockdev write zeroes read block ...passed 00:06:33.696 Test: blockdev write zeroes read no split ...passed 00:06:33.696 Test: blockdev write zeroes read split ...passed 00:06:33.696 Test: blockdev write zeroes read split partial ...passed 00:06:33.696 Test: blockdev reset ...[2024-08-11 12:48:25.259423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:33.696 [2024-08-11 12:48:25.262134] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.696 passed 00:06:33.696 Test: blockdev write read 8 blocks ...passed 00:06:33.696 Test: blockdev write read size > 128k ...passed 00:06:33.696 Test: blockdev write read invalid size ...passed 00:06:33.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.697 Test: blockdev write read max offset ...passed 00:06:33.697 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.697 Test: blockdev writev readv 8 blocks ...passed 00:06:33.697 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.697 Test: blockdev writev readv block ...passed 00:06:33.697 Test: blockdev writev readv size > 128k ...passed 00:06:33.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.697 Test: blockdev comparev and writev ...[2024-08-11 12:48:25.268817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2406000 len:0x1000 00:06:33.697 [2024-08-11 12:48:25.268886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.697 passed 00:06:33.697 Test: blockdev nvme passthru rw ...passed 00:06:33.697 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.697 Test: blockdev nvme admin passthru ...[2024-08-11 12:48:25.270119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.697 [2024-08-11 12:48:25.270199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.697 passed 00:06:33.697 Test: blockdev copy ...passed 00:06:33.697 Suite: bdevio tests on: Nvme2n1 00:06:33.697 Test: blockdev write read block ...passed 00:06:33.697 Test: blockdev write zeroes read block ...passed 00:06:33.956 Test: blockdev write zeroes read no split ...passed 00:06:33.956 Test: blockdev write zeroes read split ...passed 00:06:33.956 Test: blockdev write zeroes read split partial ...passed 00:06:33.956 Test: blockdev reset ...[2024-08-11 12:48:25.305779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:33.956 [2024-08-11 12:48:25.308754] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.956 passed 00:06:33.956 Test: blockdev write read 8 blocks ...passed 00:06:33.956 Test: blockdev write read size > 128k ...passed 00:06:33.956 Test: blockdev write read invalid size ...passed 00:06:33.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.956 Test: blockdev write read max offset ...passed 00:06:33.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.956 Test: blockdev writev readv 8 blocks ...passed 00:06:33.956 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.956 Test: blockdev writev readv block ...passed 00:06:33.956 Test: blockdev writev readv size > 128k ...passed 00:06:33.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.956 Test: blockdev comparev and writev ...[2024-08-11 12:48:25.316984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2402000 len:0x1000 00:06:33.956 [2024-08-11 12:48:25.317042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.956 passed 00:06:33.956 Test: blockdev nvme passthru rw ...passed 00:06:33.956 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.956 Test: blockdev nvme admin passthru ...[2024-08-11 12:48:25.317773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.956 [2024-08-11 12:48:25.317824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.956 passed 00:06:33.956 Test: blockdev copy ...passed 00:06:33.956 Suite: bdevio tests on: Nvme1n1 00:06:33.956 Test: blockdev write read block ...passed 00:06:33.956 Test: blockdev write zeroes read block ...passed 00:06:33.956 Test: blockdev write zeroes read no split ...passed 00:06:33.956 Test: blockdev write zeroes read split ...passed 00:06:33.956 Test: blockdev write zeroes read split partial ...passed 00:06:33.956 Test: blockdev reset ...[2024-08-11 12:48:25.341077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:33.956 [2024-08-11 12:48:25.343428] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.956 passed 00:06:33.956 Test: blockdev write read 8 blocks ...passed 00:06:33.956 Test: blockdev write read size > 128k ...passed 00:06:33.956 Test: blockdev write read invalid size ...passed 00:06:33.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.956 Test: blockdev write read max offset ...passed 00:06:33.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.956 Test: blockdev writev readv 8 blocks ...passed 00:06:33.956 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.956 Test: blockdev writev readv block ...passed 00:06:33.956 Test: blockdev writev readv size > 128k ...passed 00:06:33.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.956 Test: blockdev comparev and writev ...[2024-08-11 12:48:25.350865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2802000 len:0x1000 00:06:33.956 [2024-08-11 12:48:25.350948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.956 passed 00:06:33.956 Test: blockdev nvme passthru rw ...passed 00:06:33.956 Test: blockdev nvme passthru vendor specific ...[2024-08-11 12:48:25.351819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.956 passed 00:06:33.956 Test: blockdev nvme admin passthru ...[2024-08-11 12:48:25.351897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.956 passed 00:06:33.956 Test: blockdev copy ...passed 00:06:33.956 Suite: bdevio tests on: Nvme0n1 00:06:33.956 Test: blockdev write read block ...passed 00:06:33.956 Test: blockdev write zeroes read block ...passed 00:06:33.956 Test: blockdev write zeroes read no split ...passed 00:06:33.956 Test: blockdev write zeroes read split ...passed 00:06:33.956 Test: blockdev write zeroes read split partial ...passed 00:06:33.956 Test: blockdev reset ...[2024-08-11 12:48:25.376380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:33.956 [2024-08-11 12:48:25.378672] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.956 passed 00:06:33.956 Test: blockdev write read 8 blocks ...passed 00:06:33.956 Test: blockdev write read size > 128k ...passed 00:06:33.956 Test: blockdev write read invalid size ...passed 00:06:33.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.956 Test: blockdev write read max offset ...passed 00:06:33.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.956 Test: blockdev writev readv 8 blocks ...passed 00:06:33.956 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.956 Test: blockdev writev readv block ...passed 00:06:33.956 Test: blockdev writev readv size > 128k ...passed 00:06:33.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.956 Test: blockdev comparev and writev ...passed 00:06:33.956 Test: blockdev nvme passthru rw ...[2024-08-11 12:48:25.385019] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:33.956 separate metadata which is not supported yet. 00:06:33.956 passed 00:06:33.956 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.956 Test: blockdev nvme admin passthru ...[2024-08-11 12:48:25.385557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:33.956 [2024-08-11 12:48:25.385613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:33.956 passed 00:06:33.956 Test: blockdev copy ...passed 00:06:33.956 00:06:33.956 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.956 suites 6 6 n/a 0 0 00:06:33.956 tests 138 138 138 0 0 00:06:33.956 asserts 893 893 893 0 n/a 00:06:33.956 00:06:33.956 Elapsed time = 0.442 seconds 00:06:33.956 0 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71305 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 71305 ']' 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 71305 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71305 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71305' 00:06:33.956 killing process with pid 71305 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 71305 00:06:33.956 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 71305 00:06:34.215 12:48:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:34.215 00:06:34.215 real 0m1.563s 00:06:34.215 user 0m4.003s 00:06:34.215 sys 0m0.306s 00:06:34.215 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.215 12:48:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:34.215 ************************************ 00:06:34.215 END 
TEST bdev_bounds 00:06:34.215 ************************************ 00:06:34.215 12:48:25 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:34.215 12:48:25 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:34.215 12:48:25 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.216 12:48:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.216 ************************************ 00:06:34.216 START TEST bdev_nbd 00:06:34.216 ************************************ 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71361 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71361 /var/tmp/spdk-nbd.sock 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 71361 ']' 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
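A condensed sketch of the NBD round-trip that the traces below perform for each of the six bdevs; the rpc.py subcommands, the /var/tmp/spdk-nbd.sock socket and the dd/grep verification are taken from those traces, while the explicit loop and the /tmp output path are assumptions:

    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)
    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        nbd=$("${rpc[@]}" nbd_start_disk "$bdev")          # e.g. /dev/nbd0
        # the waitfornbd helper below retries this check up to 20 times
        grep -q -w "$(basename "$nbd")" /proc/partitions
        # read one 4 KiB block through the NBD device to prove I/O works
        dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        "${rpc[@]}" nbd_stop_disk "$nbd"
    done
    "${rpc[@]}" nbd_get_disks    # remaining {nbd_device, bdev_name} pairs as JSON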
00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.216 12:48:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:34.216 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:34.216 [2024-08-11 12:48:25.777055] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:34.216 [2024-08-11 12:48:25.777218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.475 [2024-08-11 12:48:25.926947] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.475 [2024-08-11 12:48:25.961157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.411 12:48:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:35.670 12:48:27 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.670 1+0 records in 00:06:35.670 1+0 records out 00:06:35.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387059 s, 10.6 MB/s 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.670 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.929 1+0 records in 00:06:35.929 1+0 records out 00:06:35.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055745 s, 7.3 MB/s 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:35.929 12:48:27 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.929 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:06:36.187 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.188 1+0 records in 00:06:36.188 1+0 records out 00:06:36.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646326 s, 6.3 MB/s 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.188 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # 
break 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.446 1+0 records in 00:06:36.446 1+0 records out 00:06:36.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800058 s, 5.1 MB/s 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.446 12:48:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:37.013 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:37.013 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:37.013 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.014 1+0 records in 00:06:37.014 1+0 records out 00:06:37.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667602 s, 6.1 MB/s 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.014 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.014 12:48:28 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.272 1+0 records in 00:06:37.272 1+0 records out 00:06:37.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159687 s, 2.6 MB/s 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.272 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd0", 00:06:37.531 "bdev_name": "Nvme0n1" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd1", 00:06:37.531 "bdev_name": "Nvme1n1" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd2", 00:06:37.531 "bdev_name": "Nvme2n1" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd3", 00:06:37.531 "bdev_name": "Nvme2n2" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd4", 00:06:37.531 "bdev_name": "Nvme2n3" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd5", 00:06:37.531 "bdev_name": "Nvme3n1" 00:06:37.531 } 00:06:37.531 ]' 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd0", 00:06:37.531 "bdev_name": "Nvme0n1" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": 
"/dev/nbd1", 00:06:37.531 "bdev_name": "Nvme1n1" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd2", 00:06:37.531 "bdev_name": "Nvme2n1" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd3", 00:06:37.531 "bdev_name": "Nvme2n2" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd4", 00:06:37.531 "bdev_name": "Nvme2n3" 00:06:37.531 }, 00:06:37.531 { 00:06:37.531 "nbd_device": "/dev/nbd5", 00:06:37.531 "bdev_name": "Nvme3n1" 00:06:37.531 } 00:06:37.531 ]' 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.531 12:48:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.789 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.048 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.307 12:48:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.566 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.825 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:39.083 
12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.083 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:39.341 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.600 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:39.600 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.600 12:48:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.600 12:48:30 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:39.860 /dev/nbd0 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.860 1+0 records in 00:06:39.860 1+0 records out 00:06:39.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802635 s, 5.1 MB/s 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.860 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:40.119 /dev/nbd1 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.119 
1+0 records in 00:06:40.119 1+0 records out 00:06:40.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617036 s, 6.6 MB/s 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.119 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:40.378 /dev/nbd10 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.378 1+0 records in 00:06:40.378 1+0 records out 00:06:40.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551883 s, 7.4 MB/s 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.378 12:48:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:40.637 /dev/nbd11 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # 
local nbd_name=nbd11 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.637 1+0 records in 00:06:40.637 1+0 records out 00:06:40.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791806 s, 5.2 MB/s 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.637 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:40.895 /dev/nbd12 00:06:40.895 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:40.895 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:40.895 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:06:40.895 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:40.895 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:40.895 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:40.895 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.896 1+0 records in 00:06:40.896 1+0 records out 00:06:40.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698964 s, 5.9 MB/s 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.896 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:41.154 /dev/nbd13 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.154 1+0 records in 00:06:41.154 1+0 records out 00:06:41.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763632 s, 5.4 MB/s 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.154 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.413 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd0", 00:06:41.413 "bdev_name": "Nvme0n1" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd1", 00:06:41.413 "bdev_name": "Nvme1n1" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd10", 00:06:41.413 "bdev_name": "Nvme2n1" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd11", 00:06:41.413 "bdev_name": "Nvme2n2" 00:06:41.413 }, 
00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd12", 00:06:41.413 "bdev_name": "Nvme2n3" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd13", 00:06:41.413 "bdev_name": "Nvme3n1" 00:06:41.413 } 00:06:41.413 ]' 00:06:41.413 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd0", 00:06:41.413 "bdev_name": "Nvme0n1" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd1", 00:06:41.413 "bdev_name": "Nvme1n1" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd10", 00:06:41.413 "bdev_name": "Nvme2n1" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd11", 00:06:41.413 "bdev_name": "Nvme2n2" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd12", 00:06:41.413 "bdev_name": "Nvme2n3" 00:06:41.413 }, 00:06:41.413 { 00:06:41.413 "nbd_device": "/dev/nbd13", 00:06:41.413 "bdev_name": "Nvme3n1" 00:06:41.413 } 00:06:41.413 ]' 00:06:41.413 12:48:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.682 /dev/nbd1 00:06:41.682 /dev/nbd10 00:06:41.682 /dev/nbd11 00:06:41.682 /dev/nbd12 00:06:41.682 /dev/nbd13' 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.682 /dev/nbd1 00:06:41.682 /dev/nbd10 00:06:41.682 /dev/nbd11 00:06:41.682 /dev/nbd12 00:06:41.682 /dev/nbd13' 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:41.682 256+0 records in 00:06:41.682 256+0 records out 00:06:41.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00654727 s, 160 MB/s 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.682 256+0 records in 00:06:41.682 256+0 records out 00:06:41.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172366 s, 6.1 MB/s 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.682 12:48:33 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.966 256+0 records in 00:06:41.966 256+0 records out 00:06:41.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169552 s, 6.2 MB/s 00:06:41.966 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.966 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:42.237 256+0 records in 00:06:42.237 256+0 records out 00:06:42.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174089 s, 6.0 MB/s 00:06:42.237 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.237 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:42.237 256+0 records in 00:06:42.237 256+0 records out 00:06:42.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185407 s, 5.7 MB/s 00:06:42.237 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.237 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:42.496 256+0 records in 00:06:42.496 256+0 records out 00:06:42.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160735 s, 6.5 MB/s 00:06:42.496 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.496 12:48:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:42.755 256+0 records in 00:06:42.755 256+0 records out 00:06:42.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173465 s, 6.0 MB/s 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:42.755 12:48:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.755 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.014 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.272 12:48:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.531 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.790 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.049 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:44.308 
12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.308 12:48:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:06:44.567 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:44.825 malloc_lvol_verify 00:06:44.826 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:45.085 31515d05-791a-4899-b3e5-e1b3a5e3c91b 00:06:45.085 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:45.653 f816f53a-85ef-4c4c-a09c-6276bc01bbbf 00:06:45.653 12:48:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:45.653 /dev/nbd0 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:06:45.653 mke2fs 1.47.0 
(5-Feb-2023) 00:06:45.653 Discarding device blocks: 0/4096 done 00:06:45.653 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:45.653 00:06:45.653 Allocating group tables: 0/1 done 00:06:45.653 Writing inode tables: 0/1 done 00:06:45.653 Creating journal (1024 blocks): done 00:06:45.653 Writing superblocks and filesystem accounting information: 0/1 done 00:06:45.653 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.653 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71361 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 71361 ']' 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 71361 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.912 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71361 00:06:46.171 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.171 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.171 killing process with pid 71361 00:06:46.171 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71361' 00:06:46.171 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 71361 00:06:46.171 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 71361 00:06:46.171 12:48:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:46.171 00:06:46.171 real 0m12.050s 00:06:46.171 user 0m17.788s 00:06:46.171 sys 0m3.892s 00:06:46.171 12:48:37 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.171 ************************************ 00:06:46.171 END TEST bdev_nbd 00:06:46.171 ************************************ 00:06:46.171 12:48:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:46.431 12:48:37 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:46.431 12:48:37 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:46.431 skipping fio tests on NVMe due to multi-ns failures. 00:06:46.431 12:48:37 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:46.431 12:48:37 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:46.431 12:48:37 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:46.431 12:48:37 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:06:46.431 12:48:37 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.431 12:48:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.431 ************************************ 00:06:46.431 START TEST bdev_verify 00:06:46.431 ************************************ 00:06:46.431 12:48:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:46.431 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:46.431 [2024-08-11 12:48:37.865066] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:46.431 [2024-08-11 12:48:37.865254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71754 ] 00:06:46.431 [2024-08-11 12:48:38.006444] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.690 [2024-08-11 12:48:38.039852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.690 [2024-08-11 12:48:38.039973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.949 Running I/O for 5 seconds... 
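Every nbd_start_disk call in the bdev_nbd trace above is followed by the same readiness check: poll /proc/partitions until the new nbdX node shows up, then read a single 4 KiB block back through it and confirm the copy is non-empty. A condensed bash reconstruction of that helper is sketched below; only the success path is visible in the xtrace, so the retry back-off and the scratch-file location (/tmp/nbdtest here, a repo-local path in the trace) are assumptions.

waitfornbd() {
    local nbd_name=$1
    local i
    # Poll /proc/partitions until the kernel exposes the nbd node.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not visible in the xtrace
    done
    # Read one 4 KiB block back through the device to prove I/O works.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1   # assumed
    done
    return 1
}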
00:06:52.221
00:06:52.221 Latency(us)
00:06:52.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:52.221 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.221 Verification LBA range: start 0x0 length 0xbd0bd
00:06:52.221 Nvme0n1 : 5.04 1523.96 5.95 0.00 0.00 83650.60 16086.11 82932.83
00:06:52.221 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.221 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:52.221 Nvme0n1 : 5.04 1549.73 6.05 0.00 0.00 82261.33 16920.20 85792.58
00:06:52.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.221 Verification LBA range: start 0x0 length 0xa0000
00:06:52.221 Nvme1n1 : 5.07 1527.67 5.97 0.00 0.00 83211.08 8102.63 72923.69
00:06:52.221 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.221 Verification LBA range: start 0xa0000 length 0xa0000
00:06:52.221 Nvme1n1 : 5.07 1553.71 6.07 0.00 0.00 81825.40 7983.48 76260.07
00:06:52.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.221 Verification LBA range: start 0x0 length 0x80000
00:06:52.221 Nvme2n1 : 5.08 1536.43 6.00 0.00 0.00 82754.29 9234.62 70540.57
00:06:52.221 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.221 Verification LBA range: start 0x80000 length 0x80000
00:06:52.222 Nvme2n1 : 5.08 1561.93 6.10 0.00 0.00 81399.61 10724.07 69110.69
00:06:52.222 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.222 Verification LBA range: start 0x0 length 0x80000
00:06:52.222 Nvme2n2 : 5.08 1535.80 6.00 0.00 0.00 82617.23 9592.09 69587.32
00:06:52.222 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.222 Verification LBA range: start 0x80000 length 0x80000
00:06:52.222 Nvme2n2 : 5.08 1561.50 6.10 0.00 0.00 81194.03 10962.39 68634.07
00:06:52.222 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.222 Verification LBA range: start 0x0 length 0x80000
00:06:52.222 Nvme2n3 : 5.09 1535.16 6.00 0.00 0.00 82464.98 10068.71 71970.44
00:06:52.222 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.222 Verification LBA range: start 0x80000 length 0x80000
00:06:52.222 Nvme2n3 : 5.08 1560.85 6.10 0.00 0.00 81024.89 11200.70 71493.82
00:06:52.222 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:52.222 Verification LBA range: start 0x0 length 0x20000
00:06:52.222 Nvme3n1 : 5.09 1534.58 5.99 0.00 0.00 82287.72 10664.49 74830.20
00:06:52.222 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:52.222 Verification LBA range: start 0x20000 length 0x20000
00:06:52.222 Nvme3n1 : 5.09 1560.22 6.09 0.00 0.00 80867.81 11141.12 74353.57
00:06:52.222 ===================================================================================================================
00:06:52.222 Total : 18541.56 72.43 0.00 0.00 82121.42 7983.48 85792.58
00:06:52.480
00:06:52.480 real 0m6.212s
00:06:52.480 user 0m11.638s
00:06:52.480 sys 0m0.209s
00:06:52.480 12:48:43 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:52.480 ************************************
00:06:52.480 END TEST bdev_verify
00:06:52.480 ************************************
00:06:52.480 12:48:43 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:52.480 12:48:44 blockdev_nvme -- bdev/blockdev.sh@777 -- #
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:52.480 12:48:44 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:06:52.480 12:48:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.480 12:48:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.480 ************************************ 00:06:52.480 START TEST bdev_verify_big_io 00:06:52.480 ************************************ 00:06:52.480 12:48:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:52.739 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:52.739 [2024-08-11 12:48:44.144998] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:52.739 [2024-08-11 12:48:44.145187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71841 ] 00:06:52.739 [2024-08-11 12:48:44.291077] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.739 [2024-08-11 12:48:44.324041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.739 [2024-08-11 12:48:44.324067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.307 Running I/O for 5 seconds... 00:06:59.874 00:06:59.874 Latency(us) 00:06:59.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.874 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x0 length 0xbd0b 00:06:59.874 Nvme0n1 : 5.71 129.02 8.06 0.00 0.00 932760.06 16324.42 1014258.97 00:06:59.874 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:59.874 Nvme0n1 : 5.73 128.53 8.03 0.00 0.00 954109.01 25618.62 999006.95 00:06:59.874 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x0 length 0xa000 00:06:59.874 Nvme1n1 : 5.77 132.57 8.29 0.00 0.00 897784.48 102474.47 842673.80 00:06:59.874 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0xa000 length 0xa000 00:06:59.874 Nvme1n1 : 5.74 129.53 8.10 0.00 0.00 923979.82 77689.95 846486.81 00:06:59.874 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x0 length 0x8000 00:06:59.874 Nvme2n1 : 5.77 130.09 8.13 0.00 0.00 895587.16 55526.87 1525201.45 00:06:59.874 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x8000 length 0x8000 00:06:59.874 Nvme2n1 : 5.74 133.83 8.36 0.00 0.00 878959.40 82456.20 793104.76 00:06:59.874 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x0 length 0x8000 00:06:59.874 Nvme2n2 : 5.85 134.35 8.40 0.00 0.00 839940.48 36938.47 1555705.48 00:06:59.874 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA 
range: start 0x8000 length 0x8000 00:06:59.874 Nvme2n2 : 5.78 136.96 8.56 0.00 0.00 833367.71 34078.72 823608.79 00:06:59.874 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x0 length 0x8000 00:06:59.874 Nvme2n3 : 5.90 139.18 8.70 0.00 0.00 786379.61 39321.60 1570957.50 00:06:59.874 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x8000 length 0x8000 00:06:59.874 Nvme2n3 : 5.81 143.14 8.95 0.00 0.00 776285.66 33125.47 892242.85 00:06:59.874 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x0 length 0x2000 00:06:59.874 Nvme3n1 : 5.92 159.00 9.94 0.00 0.00 674475.24 685.15 1609087.53 00:06:59.874 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.874 Verification LBA range: start 0x2000 length 0x2000 00:06:59.874 Nvme3n1 : 5.90 162.61 10.16 0.00 0.00 665806.53 1213.91 918933.88 00:06:59.874 =================================================================================================================== 00:06:59.874 Total : 1658.82 103.68 0.00 0.00 830164.62 685.15 1609087.53 00:06:59.874 00:06:59.874 real 0m7.274s 00:06:59.874 user 0m13.726s 00:06:59.874 sys 0m0.230s 00:06:59.874 12:48:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.874 ************************************ 00:06:59.874 END TEST bdev_verify_big_io 00:06:59.874 ************************************ 00:06:59.874 12:48:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:59.874 12:48:51 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.874 12:48:51 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:59.874 12:48:51 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.874 12:48:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.874 ************************************ 00:06:59.874 START TEST bdev_write_zeroes 00:06:59.874 ************************************ 00:06:59.874 12:48:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.874 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:59.874 [2024-08-11 12:48:51.449813] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:59.874 [2024-08-11 12:48:51.449974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71945 ] 00:07:00.133 [2024-08-11 12:48:51.590444] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.133 [2024-08-11 12:48:51.623654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.701 Running I/O for 1 seconds... 
00:07:01.635 00:07:01.635 Latency(us) 00:07:01.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.635 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:01.635 Nvme0n1 : 1.02 9452.81 36.93 0.00 0.00 13508.03 6970.65 26095.24 00:07:01.635 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:01.635 Nvme1n1 : 1.02 9438.30 36.87 0.00 0.00 13505.27 10485.76 19660.80 00:07:01.635 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:01.635 Nvme2n1 : 1.02 9424.05 36.81 0.00 0.00 13488.47 10187.87 17992.61 00:07:01.635 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:01.635 Nvme2n2 : 1.02 9409.88 36.76 0.00 0.00 13447.53 7923.90 17515.99 00:07:01.635 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:01.635 Nvme2n3 : 1.02 9395.96 36.70 0.00 0.00 13443.21 7626.01 17158.52 00:07:01.635 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:01.635 Nvme3n1 : 1.02 9381.70 36.65 0.00 0.00 13437.81 7357.91 17515.99 00:07:01.635 =================================================================================================================== 00:07:01.635 Total : 56502.70 220.71 0.00 0.00 13471.72 6970.65 26095.24 00:07:01.893 00:07:01.893 real 0m1.928s 00:07:01.893 user 0m1.641s 00:07:01.893 sys 0m0.171s 00:07:01.893 12:48:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.893 ************************************ 00:07:01.893 END TEST bdev_write_zeroes 00:07:01.893 ************************************ 00:07:01.893 12:48:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:01.893 12:48:53 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.893 12:48:53 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:01.893 12:48:53 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.893 12:48:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.893 ************************************ 00:07:01.893 START TEST bdev_json_nonenclosed 00:07:01.893 ************************************ 00:07:01.893 12:48:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.893 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:01.893 [2024-08-11 12:48:53.445358] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:01.893 [2024-08-11 12:48:53.445533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71987 ] 00:07:02.151 [2024-08-11 12:48:53.595565] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.151 [2024-08-11 12:48:53.642094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.151 [2024-08-11 12:48:53.642224] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:02.151 [2024-08-11 12:48:53.642274] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:02.151 [2024-08-11 12:48:53.642294] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.430 ************************************ 00:07:02.430 END TEST bdev_json_nonenclosed 00:07:02.430 ************************************ 00:07:02.430 00:07:02.430 real 0m0.401s 00:07:02.430 user 0m0.182s 00:07:02.430 sys 0m0.115s 00:07:02.430 12:48:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.430 12:48:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:02.430 12:48:53 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.430 12:48:53 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:02.430 12:48:53 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.430 12:48:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.430 ************************************ 00:07:02.430 START TEST bdev_json_nonarray 00:07:02.430 ************************************ 00:07:02.430 12:48:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.430 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:02.430 [2024-08-11 12:48:53.903992] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:02.430 [2024-08-11 12:48:53.904262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72012 ] 00:07:02.709 [2024-08-11 12:48:54.051342] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.709 [2024-08-11 12:48:54.093420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.709 [2024-08-11 12:48:54.093804] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:02.709 [2024-08-11 12:48:54.093864] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:02.709 [2024-08-11 12:48:54.093912] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.709 00:07:02.709 real 0m0.396s 00:07:02.709 user 0m0.184s 00:07:02.709 sys 0m0.109s 00:07:02.709 12:48:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.709 ************************************ 00:07:02.709 END TEST bdev_json_nonarray 00:07:02.709 ************************************ 00:07:02.709 12:48:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:02.709 12:48:54 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:02.709 00:07:02.709 real 0m33.242s 00:07:02.709 user 0m52.280s 00:07:02.709 sys 0m6.004s 00:07:02.709 12:48:54 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.709 ************************************ 00:07:02.709 END TEST blockdev_nvme 00:07:02.709 ************************************ 00:07:02.709 12:48:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.709 12:48:54 -- spdk/autotest.sh@222 -- # uname -s 00:07:02.970 12:48:54 -- spdk/autotest.sh@222 -- # [[ Linux == Linux ]] 00:07:02.970 12:48:54 -- spdk/autotest.sh@223 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:02.970 12:48:54 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:02.970 12:48:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.970 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:07:02.970 ************************************ 00:07:02.970 START TEST blockdev_nvme_gpt 00:07:02.970 ************************************ 00:07:02.970 12:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:02.970 * Looking for test storage... 
00:07:02.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:02.970 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72083 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:02.971 12:48:54 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72083 00:07:02.971 12:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 72083 ']' 00:07:02.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.971 12:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.971 12:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.971 12:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:02.971 12:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.971 12:48:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.971 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:02.971 [2024-08-11 12:48:54.516894] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:02.971 [2024-08-11 12:48:54.517069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72083 ] 00:07:03.231 [2024-08-11 12:48:54.667965] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.231 [2024-08-11 12:48:54.709246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.167 12:48:55 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.167 12:48:55 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:07:04.167 12:48:55 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:04.167 12:48:55 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:04.167 12:48:55 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:04.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:04.683 Waiting for block devices as requested 00:07:04.683 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:04.683 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:04.683 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:04.942 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.210 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:07:10.210 
12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:10.210 BYT; 00:07:10.210 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:10.210 BYT; 00:07:10.210 
/dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.210 12:49:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.210 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.211 12:49:01 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:11.146 The operation has completed successfully. 
00:07:11.146 12:49:02 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:12.082 The operation has completed successfully. 00:07:12.082 12:49:03 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.217 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.217 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.217 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.217 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.217 12:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:13.217 12:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:13.217 12:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.217 [] 00:07:13.217 12:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:13.217 12:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:13.217 12:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:13.217 12:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:13.217 12:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:13.476 12:49:04 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:13.476 12:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:13.476 12:49:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:13.735 
12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:13.735 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:13.735 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:13.736 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8d6f10e3-c82b-48e9-8ef7-ed89e3703115"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8d6f10e3-c82b-48e9-8ef7-ed89e3703115",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5f8787e2-5771-432e-896e-73900d07707c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5f8787e2-5771-432e-896e-73900d07707c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3f2b4db9-e8f3-4e58-a98d-108ef628957d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3f2b4db9-e8f3-4e58-a98d-108ef628957d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c66ca214-33ca-4759-a89a-636a171cf447"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c66ca214-33ca-4759-a89a-636a171cf447",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c35756fe-10fc-49f5-afca-ce786a423538"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c35756fe-10fc-49f5-afca-ce786a423538",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:13.995 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:13.995 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:13.995 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:13.995 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72083 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 72083 ']' 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 72083 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72083 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.995 killing process with pid 72083 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72083' 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 72083 00:07:13.995 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 72083 00:07:14.254 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.254 12:49:05 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.254 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:14.254 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.254 12:49:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.254 ************************************ 00:07:14.254 START TEST bdev_hello_world 00:07:14.254 ************************************ 00:07:14.254 12:49:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.254 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:14.254 [2024-08-11 12:49:05.787501] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:14.254 [2024-08-11 12:49:05.787693] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72698 ] 00:07:14.513 [2024-08-11 12:49:05.933948] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.513 [2024-08-11 12:49:05.965633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.772 [2024-08-11 12:49:06.331139] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:14.772 [2024-08-11 12:49:06.331198] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:14.772 [2024-08-11 12:49:06.331237] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:14.772 [2024-08-11 12:49:06.333431] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:14.772 [2024-08-11 12:49:06.334008] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:14.772 [2024-08-11 12:49:06.334074] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:14.772 [2024-08-11 12:49:06.334269] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:14.772 00:07:14.772 [2024-08-11 12:49:06.334302] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:15.030 00:07:15.030 real 0m0.808s 00:07:15.030 user 0m0.531s 00:07:15.030 sys 0m0.173s 00:07:15.030 12:49:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.030 12:49:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:15.030 ************************************ 00:07:15.030 END TEST bdev_hello_world 00:07:15.030 ************************************ 00:07:15.031 12:49:06 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:15.031 12:49:06 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:15.031 12:49:06 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.031 12:49:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.031 ************************************ 00:07:15.031 START TEST bdev_bounds 00:07:15.031 ************************************ 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72723 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.031 Process bdevio pid: 72723 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72723' 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72723 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 72723 ']' 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.031 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.031 12:49:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:15.290 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:15.290 [2024-08-11 12:49:06.637283] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:15.290 [2024-08-11 12:49:06.637408] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72723 ] 00:07:15.290 [2024-08-11 12:49:06.775797] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.290 [2024-08-11 12:49:06.810634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.290 [2024-08-11 12:49:06.810598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.290 [2024-08-11 12:49:06.810733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.227 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.227 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:07:16.227 12:49:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:16.227 I/O targets: 00:07:16.227 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:16.227 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:16.227 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:16.227 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:16.227 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:16.227 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:16.227 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:16.227 00:07:16.227 00:07:16.227 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.227 http://cunit.sourceforge.net/ 00:07:16.227 00:07:16.227 00:07:16.227 Suite: bdevio tests on: Nvme3n1 00:07:16.227 Test: blockdev write read block ...passed 00:07:16.227 Test: blockdev write zeroes read block ...passed 00:07:16.227 Test: blockdev write zeroes read no split ...passed 00:07:16.227 Test: blockdev write zeroes read split ...passed 00:07:16.227 Test: blockdev write zeroes read split partial ...passed 00:07:16.227 Test: blockdev reset ...[2024-08-11 12:49:07.745412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:16.227 passed 00:07:16.227 Test: blockdev write read 8 blocks ...[2024-08-11 12:49:07.747794] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:16.227 passed 00:07:16.227 Test: blockdev write read size > 128k ...passed 00:07:16.227 Test: blockdev write read invalid size ...passed 00:07:16.227 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.227 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.227 Test: blockdev write read max offset ...passed 00:07:16.227 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.227 Test: blockdev writev readv 8 blocks ...passed 00:07:16.227 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.227 Test: blockdev writev readv block ...passed 00:07:16.227 Test: blockdev writev readv size > 128k ...passed 00:07:16.227 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.227 Test: blockdev comparev and writev ...[2024-08-11 12:49:07.754688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1a06000 len:0x1000 00:07:16.227 [2024-08-11 12:49:07.754763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.227 passed 00:07:16.227 Test: blockdev nvme passthru rw ...passed 00:07:16.227 Test: blockdev nvme passthru vendor specific ...[2024-08-11 12:49:07.755798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.227 [2024-08-11 12:49:07.755861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.227 passed 00:07:16.227 Test: blockdev nvme admin passthru ...passed 00:07:16.227 Test: blockdev copy ...passed 00:07:16.227 Suite: bdevio tests on: Nvme2n3 00:07:16.227 Test: blockdev write read block ...passed 00:07:16.227 Test: blockdev write zeroes read block ...passed 00:07:16.227 Test: blockdev write zeroes read no split ...passed 00:07:16.227 Test: blockdev write zeroes read split ...passed 00:07:16.227 Test: blockdev write zeroes read split partial ...passed 00:07:16.227 Test: blockdev reset ...[2024-08-11 12:49:07.779195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:16.227 passed 00:07:16.227 Test: blockdev write read 8 blocks ...[2024-08-11 12:49:07.781800] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:16.227 passed 00:07:16.227 Test: blockdev write read size > 128k ...passed 00:07:16.227 Test: blockdev write read invalid size ...passed 00:07:16.227 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.227 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.227 Test: blockdev write read max offset ...passed 00:07:16.227 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.227 Test: blockdev writev readv 8 blocks ...passed 00:07:16.227 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.227 Test: blockdev writev readv block ...passed 00:07:16.227 Test: blockdev writev readv size > 128k ...passed 00:07:16.227 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.227 Test: blockdev comparev and writev ...[2024-08-11 12:49:07.788624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac03d000 len:0x1000 00:07:16.227 [2024-08-11 12:49:07.788695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.227 passed 00:07:16.228 Test: blockdev nvme passthru rw ...passed 00:07:16.228 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.228 Test: blockdev nvme admin passthru ...[2024-08-11 12:49:07.789762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.228 [2024-08-11 12:49:07.789826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.228 passed 00:07:16.228 Test: blockdev copy ...passed 00:07:16.228 Suite: bdevio tests on: Nvme2n2 00:07:16.228 Test: blockdev write read block ...passed 00:07:16.228 Test: blockdev write zeroes read block ...passed 00:07:16.228 Test: blockdev write zeroes read no split ...passed 00:07:16.228 Test: blockdev write zeroes read split ...passed 00:07:16.228 Test: blockdev write zeroes read split partial ...passed 00:07:16.228 Test: blockdev reset ...[2024-08-11 12:49:07.814492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:16.228 [2024-08-11 12:49:07.817144] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:16.228 passed 00:07:16.228 Test: blockdev write read 8 blocks ...passed 00:07:16.228 Test: blockdev write read size > 128k ...passed 00:07:16.228 Test: blockdev write read invalid size ...passed 00:07:16.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.228 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.228 Test: blockdev write read max offset ...passed 00:07:16.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.228 Test: blockdev writev readv 8 blocks ...passed 00:07:16.228 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.228 Test: blockdev writev readv block ...passed 00:07:16.228 Test: blockdev writev readv size > 128k ...passed 00:07:16.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.487 Test: blockdev comparev and writev ...[2024-08-11 12:49:07.824922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac039000 len:0x1000 00:07:16.487 [2024-08-11 12:49:07.825005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.487 passed 00:07:16.487 Test: blockdev nvme passthru rw ...passed 00:07:16.487 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.487 Test: blockdev nvme admin passthru ...[2024-08-11 12:49:07.825848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.487 [2024-08-11 12:49:07.825939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.487 passed 00:07:16.487 Test: blockdev copy ...passed 00:07:16.487 Suite: bdevio tests on: Nvme2n1 00:07:16.487 Test: blockdev write read block ...passed 00:07:16.487 Test: blockdev write zeroes read block ...passed 00:07:16.487 Test: blockdev write zeroes read no split ...passed 00:07:16.487 Test: blockdev write zeroes read split ...passed 00:07:16.487 Test: blockdev write zeroes read split partial ...passed 00:07:16.487 Test: blockdev reset ...[2024-08-11 12:49:07.849731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:16.487 [2024-08-11 12:49:07.852252] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:16.487 passed 00:07:16.487 Test: blockdev write read 8 blocks ...passed 00:07:16.487 Test: blockdev write read size > 128k ...passed 00:07:16.487 Test: blockdev write read invalid size ...passed 00:07:16.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.487 Test: blockdev write read max offset ...passed 00:07:16.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.487 Test: blockdev writev readv 8 blocks ...passed 00:07:16.487 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.487 Test: blockdev writev readv block ...passed 00:07:16.487 Test: blockdev writev readv size > 128k ...passed 00:07:16.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.487 Test: blockdev comparev and writev ...[2024-08-11 12:49:07.859628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac035000 len:0x1000 00:07:16.487 [2024-08-11 12:49:07.859697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.487 passed 00:07:16.487 Test: blockdev nvme passthru rw ...passed 00:07:16.487 Test: blockdev nvme passthru vendor specific ...[2024-08-11 12:49:07.860632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.487 [2024-08-11 12:49:07.860808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.487 passed 00:07:16.487 Test: blockdev nvme admin passthru ...passed 00:07:16.487 Test: blockdev copy ...passed 00:07:16.487 Suite: bdevio tests on: Nvme1n1p2 00:07:16.487 Test: blockdev write read block ...passed 00:07:16.487 Test: blockdev write zeroes read block ...passed 00:07:16.487 Test: blockdev write zeroes read no split ...passed 00:07:16.487 Test: blockdev write zeroes read split ...passed 00:07:16.487 Test: blockdev write zeroes read split partial ...passed 00:07:16.487 Test: blockdev reset ...[2024-08-11 12:49:07.884371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:16.487 passed 00:07:16.487 Test: blockdev write read 8 blocks ...[2024-08-11 12:49:07.886577] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:16.487 passed 00:07:16.487 Test: blockdev write read size > 128k ...passed 00:07:16.487 Test: blockdev write read invalid size ...passed 00:07:16.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.487 Test: blockdev write read max offset ...passed 00:07:16.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.488 Test: blockdev writev readv 8 blocks ...passed 00:07:16.488 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.488 Test: blockdev writev readv block ...passed 00:07:16.488 Test: blockdev writev readv size > 128k ...passed 00:07:16.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.488 Test: blockdev comparev and writev ...[2024-08-11 12:49:07.893580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ac031000 len:0x1000 00:07:16.488 [2024-08-11 12:49:07.893649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.488 passed 00:07:16.488 Test: blockdev nvme passthru rw ...passed 00:07:16.488 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.488 Test: blockdev nvme admin passthru ...passed 00:07:16.488 Test: blockdev copy ...passed 00:07:16.488 Suite: bdevio tests on: Nvme1n1p1 00:07:16.488 Test: blockdev write read block ...passed 00:07:16.488 Test: blockdev write zeroes read block ...passed 00:07:16.488 Test: blockdev write zeroes read no split ...passed 00:07:16.488 Test: blockdev write zeroes read split ...passed 00:07:16.488 Test: blockdev write zeroes read split partial ...passed 00:07:16.488 Test: blockdev reset ...[2024-08-11 12:49:07.906228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:16.488 [2024-08-11 12:49:07.908363] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:16.488 passed 00:07:16.488 Test: blockdev write read 8 blocks ...passed 00:07:16.488 Test: blockdev write read size > 128k ...passed 00:07:16.488 Test: blockdev write read invalid size ...passed 00:07:16.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.488 Test: blockdev write read max offset ...passed 00:07:16.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.488 Test: blockdev writev readv 8 blocks ...passed 00:07:16.488 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.488 Test: blockdev writev readv block ...passed 00:07:16.488 Test: blockdev writev readv size > 128k ...passed 00:07:16.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.488 Test: blockdev comparev and writev ...[2024-08-11 12:49:07.915215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ac02d000 len:0x1000 00:07:16.488 [2024-08-11 12:49:07.915287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.488 passed 00:07:16.488 Test: blockdev nvme passthru rw ...passed 00:07:16.488 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.488 Test: blockdev nvme admin passthru ...passed 00:07:16.488 Test: blockdev copy ...passed 00:07:16.488 Suite: bdevio tests on: Nvme0n1 00:07:16.488 Test: blockdev write read block ...passed 00:07:16.488 Test: blockdev write zeroes read block ...passed 00:07:16.488 Test: blockdev write zeroes read no split ...passed 00:07:16.488 Test: blockdev write zeroes read split ...passed 00:07:16.488 Test: blockdev write zeroes read split partial ...passed 00:07:16.488 Test: blockdev reset ...[2024-08-11 12:49:07.928200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:16.488 [2024-08-11 12:49:07.930392] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:16.488 passed 00:07:16.488 Test: blockdev write read 8 blocks ...passed 00:07:16.488 Test: blockdev write read size > 128k ...passed 00:07:16.488 Test: blockdev write read invalid size ...passed 00:07:16.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.488 Test: blockdev write read max offset ...passed 00:07:16.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.488 Test: blockdev writev readv 8 blocks ...passed 00:07:16.488 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.488 Test: blockdev writev readv block ...passed 00:07:16.488 Test: blockdev writev readv size > 128k ...passed 00:07:16.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.488 Test: blockdev comparev and writev ...passed 00:07:16.488 Test: blockdev nvme passthru rw ...[2024-08-11 12:49:07.936146] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:16.488 separate metadata which is not supported yet. 
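The *NOTICE* lines interleaved with the comparev and passthru cases above are expected: the comparev test deliberately issues an NVMe COMPARE against mismatching data, and the admin passthru test sends an opcode the controller rejects, so COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) are the pass conditions, not faults. In each nvme_qpair print, the command line shows sqid/cid/nsid/lba/len plus the data pointer, and the completion line shows the status as (SCT/SC) in hex followed by cdw0, the SQ head pointer (sqhd), phase tag (p), more bit (m) and do-not-retry bit (dnr). A small stand-alone helper for turning the (SCT/SC) pair into spec names, purely as an illustration (decode_nvme_status is hypothetical and not part of the SPDK scripts; the names are taken from the NVMe base specification):

decode_nvme_status() {
  # $1 = status code type (hex), $2 = status code (hex), as printed in "(SCT/SC)"
  local sct=$1 sc=$2 type=
  case "$sct" in
    00) type="Generic Command Status" ;;
    01) type="Command Specific Status" ;;
    02) type="Media and Data Integrity Errors" ;;
    *)  type="SCT 0x$sct" ;;
  esac
  case "$sct/$sc" in
    00/01) echo "$type / Invalid Command Opcode" ;;
    02/85) echo "$type / Compare Failure" ;;
    *)     echo "$type / SC 0x$sc" ;;
  esac
}
decode_nvme_status 02 85   # -> Media and Data Integrity Errors / Compare Failure
decode_nvme_status 00 01   # -> Generic Command Status / Invalid Command Opcode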
00:07:16.488 passed 00:07:16.488 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.488 Test: blockdev nvme admin passthru ...[2024-08-11 12:49:07.936677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:16.488 [2024-08-11 12:49:07.936733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:16.488 passed 00:07:16.488 Test: blockdev copy ...passed 00:07:16.488 00:07:16.488 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.488 suites 7 7 n/a 0 0 00:07:16.488 tests 161 161 161 0 0 00:07:16.488 asserts 1025 1025 1025 0 n/a 00:07:16.488 00:07:16.488 Elapsed time = 0.463 seconds 00:07:16.488 0 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72723 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 72723 ']' 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 72723 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72723 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.488 killing process with pid 72723 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72723' 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 72723 00:07:16.488 12:49:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 72723 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:16.747 00:07:16.747 real 0m1.603s 00:07:16.747 user 0m4.241s 00:07:16.747 sys 0m0.288s 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.747 ************************************ 00:07:16.747 END TEST bdev_bounds 00:07:16.747 ************************************ 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:16.747 12:49:08 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:16.747 12:49:08 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:07:16.747 12:49:08 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.747 12:49:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.747 ************************************ 00:07:16.747 START TEST bdev_nbd 00:07:16.747 ************************************ 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:16.747 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72772 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72772 /var/tmp/spdk-nbd.sock 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 72772 ']' 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:16.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:16.748 12:49:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:16.748 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:16.748 [2024-08-11 12:49:08.321268] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
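From here the xtrace shows nbd_rpc_start_stop_verify: with the nbd kernel module present (the script checks /sys/module/nbd) and bdev_svc running against bdev.json on /var/tmp/spdk-nbd.sock, each bdev in the list is exported as a kernel block device over NBD, probed with a one-block O_DIRECT read, listed via nbd_get_disks, and detached again. A condensed, hand-runnable version of the same sequence, using the RPCs and checks that appear verbatim in the trace below (the loop wrapper itself is an illustrative sketch, not the test script):

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
out=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # scratch file written by the dd probe

# export two of the bdevs from bdev.json; nbd_start_disk pairs a bdev with a /dev/nbdX node
$rpc -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
$rpc -s "$sock" nbd_start_disk Nvme1n1p1 /dev/nbd1

# the waitfornbd readiness check: the device must appear in /proc/partitions
# and serve a single 4096-byte O_DIRECT read
for nbd in nbd0 nbd1; do
  grep -q -w "$nbd" /proc/partitions && \
    dd if=/dev/$nbd of="$out" bs=4096 count=1 iflag=direct
done
rm -f "$out"

# list the active exports and pull out the device names, as nbd_get_disks + jq do below
$rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'

# tear the exports down again
$rpc -s "$sock" nbd_stop_disk /dev/nbd0
$rpc -s "$sock" nbd_stop_disk /dev/nbd1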
00:07:16.748 [2024-08-11 12:49:08.321443] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.007 [2024-08-11 12:49:08.472599] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.007 [2024-08-11 12:49:08.507772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.943 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.202 1+0 records in 00:07:18.202 1+0 records out 00:07:18.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365054 s, 11.2 MB/s 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.202 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.461 1+0 records in 00:07:18.461 1+0 records out 00:07:18.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00088742 s, 4.6 MB/s 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.461 12:49:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.720 1+0 records in 00:07:18.720 1+0 records out 00:07:18.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579301 s, 7.1 MB/s 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.720 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.979 1+0 records in 00:07:18.979 1+0 records out 00:07:18.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839595 s, 4.9 MB/s 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.979 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.238 1+0 records in 00:07:19.238 1+0 records out 00:07:19.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933338 s, 4.4 MB/s 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.238 12:49:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.497 1+0 records in 00:07:19.497 1+0 records out 00:07:19.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000888805 s, 4.6 MB/s 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.497 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.498 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:19.756 12:49:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.015 1+0 records in 00:07:20.016 1+0 records out 00:07:20.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920394 s, 4.5 MB/s 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.016 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd0", 00:07:20.275 "bdev_name": "Nvme0n1" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd1", 00:07:20.275 "bdev_name": "Nvme1n1p1" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd2", 00:07:20.275 "bdev_name": "Nvme1n1p2" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd3", 00:07:20.275 "bdev_name": "Nvme2n1" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd4", 00:07:20.275 "bdev_name": "Nvme2n2" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd5", 00:07:20.275 "bdev_name": "Nvme2n3" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd6", 00:07:20.275 "bdev_name": "Nvme3n1" 00:07:20.275 } 00:07:20.275 ]' 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd0", 00:07:20.275 "bdev_name": "Nvme0n1" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd1", 00:07:20.275 "bdev_name": "Nvme1n1p1" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd2", 00:07:20.275 "bdev_name": "Nvme1n1p2" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd3", 00:07:20.275 "bdev_name": "Nvme2n1" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd4", 00:07:20.275 "bdev_name": "Nvme2n2" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd5", 00:07:20.275 "bdev_name": "Nvme2n3" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd6", 00:07:20.275 "bdev_name": "Nvme3n1" 00:07:20.275 } 00:07:20.275 ]' 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.275 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.534 12:49:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.793 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.053 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.053 12:49:12 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.311 12:49:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.570 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.837 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.097 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.356 
12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.356 12:49:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:22.615 /dev/nbd0 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:22.874 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.875 1+0 records in 00:07:22.875 1+0 records out 00:07:22.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617002 s, 6.6 MB/s 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.875 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:22.875 /dev/nbd1 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:23.134 12:49:14 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.134 1+0 records in 00:07:23.134 1+0 records out 00:07:23.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526903 s, 7.8 MB/s 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.134 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:23.392 /dev/nbd10 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.392 1+0 records in 00:07:23.392 1+0 records out 00:07:23.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495312 s, 8.3 MB/s 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.392 12:49:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:23.651 /dev/nbd11 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.652 1+0 records in 00:07:23.652 1+0 records out 00:07:23.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468069 s, 8.8 MB/s 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.652 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:23.911 /dev/nbd12 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 
00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.911 1+0 records in 00:07:23.911 1+0 records out 00:07:23.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940676 s, 4.4 MB/s 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.911 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:24.170 /dev/nbd13 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.170 1+0 records in 00:07:24.170 1+0 records out 00:07:24.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007889 s, 5.2 MB/s 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.170 12:49:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:24.436 /dev/nbd14 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.436 1+0 records in 00:07:24.436 1+0 records out 00:07:24.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657618 s, 6.2 MB/s 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:07:24.436 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.702 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:24.702 12:49:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:07:24.702 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.702 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.702 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.702 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.702 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd0", 00:07:24.961 "bdev_name": "Nvme0n1" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd1", 00:07:24.961 "bdev_name": "Nvme1n1p1" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd10", 00:07:24.961 "bdev_name": "Nvme1n1p2" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd11", 00:07:24.961 "bdev_name": "Nvme2n1" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd12", 00:07:24.961 "bdev_name": "Nvme2n2" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd13", 00:07:24.961 "bdev_name": "Nvme2n3" 
00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd14", 00:07:24.961 "bdev_name": "Nvme3n1" 00:07:24.961 } 00:07:24.961 ]' 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd0", 00:07:24.961 "bdev_name": "Nvme0n1" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd1", 00:07:24.961 "bdev_name": "Nvme1n1p1" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd10", 00:07:24.961 "bdev_name": "Nvme1n1p2" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd11", 00:07:24.961 "bdev_name": "Nvme2n1" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd12", 00:07:24.961 "bdev_name": "Nvme2n2" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd13", 00:07:24.961 "bdev_name": "Nvme2n3" 00:07:24.961 }, 00:07:24.961 { 00:07:24.961 "nbd_device": "/dev/nbd14", 00:07:24.961 "bdev_name": "Nvme3n1" 00:07:24.961 } 00:07:24.961 ]' 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.961 /dev/nbd1 00:07:24.961 /dev/nbd10 00:07:24.961 /dev/nbd11 00:07:24.961 /dev/nbd12 00:07:24.961 /dev/nbd13 00:07:24.961 /dev/nbd14' 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.961 /dev/nbd1 00:07:24.961 /dev/nbd10 00:07:24.961 /dev/nbd11 00:07:24.961 /dev/nbd12 00:07:24.961 /dev/nbd13 00:07:24.961 /dev/nbd14' 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:24.961 256+0 records in 00:07:24.961 256+0 records out 00:07:24.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766367 s, 137 MB/s 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.961 256+0 records in 00:07:24.961 256+0 records out 00:07:24.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.161614 s, 6.5 MB/s 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.961 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:25.220 256+0 records in 00:07:25.220 256+0 records out 00:07:25.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203136 s, 5.2 MB/s 00:07:25.220 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.220 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:25.478 256+0 records in 00:07:25.478 256+0 records out 00:07:25.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190794 s, 5.5 MB/s 00:07:25.478 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.478 12:49:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:25.737 256+0 records in 00:07:25.737 256+0 records out 00:07:25.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186652 s, 5.6 MB/s 00:07:25.737 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.737 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:25.737 256+0 records in 00:07:25.737 256+0 records out 00:07:25.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186532 s, 5.6 MB/s 00:07:25.737 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.737 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:25.996 256+0 records in 00:07:25.996 256+0 records out 00:07:25.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180813 s, 5.8 MB/s 00:07:25.996 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.996 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:26.255 256+0 records in 00:07:26.255 256+0 records out 00:07:26.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187818 s, 5.6 MB/s 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:26.255 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.256 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:26.256 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.256 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:26.256 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.256 12:49:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.823 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.081 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.339 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.598 12:49:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.856 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.115 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.373 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.632 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.632 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.632 12:49:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:28.632 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:28.891 malloc_lvol_verify 00:07:28.891 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:29.149 e96f07b7-0de7-40b9-b4fb-b02f96d3f7a3 00:07:29.149 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:29.408 c7aef46d-6e39-4af4-aa9e-78d9b2c5fca5 00:07:29.408 12:49:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:29.666 /dev/nbd0 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:29.666 mke2fs 1.47.0 (5-Feb-2023) 00:07:29.666 Discarding device blocks: 0/4096 done 00:07:29.666 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:29.666 00:07:29.666 Allocating group tables: 0/1 done 00:07:29.666 Writing inode tables: 0/1 done 00:07:29.666 Creating journal (1024 blocks): done 00:07:29.666 Writing superblocks and filesystem accounting information: 0/1 done 00:07:29.666 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:29.666 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72772 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 72772 ']' 00:07:29.924 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 72772 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72772 00:07:29.925 killing process with pid 72772 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72772' 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 72772 00:07:29.925 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 72772 00:07:30.183 ************************************ 00:07:30.183 END TEST bdev_nbd 00:07:30.183 ************************************ 00:07:30.183 12:49:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:30.183 00:07:30.183 real 0m13.483s 00:07:30.183 user 0m19.706s 00:07:30.183 sys 0m4.508s 00:07:30.183 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.183 12:49:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:30.183 12:49:21 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:30.183 12:49:21 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:30.183 skipping fio tests on NVMe due to multi-ns failures. 00:07:30.183 12:49:21 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:30.183 12:49:21 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:30.183 12:49:21 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:30.183 12:49:21 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:30.183 12:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:07:30.183 12:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.183 12:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.183 ************************************ 00:07:30.183 START TEST bdev_verify 00:07:30.183 ************************************ 00:07:30.183 12:49:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:30.442 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:30.442 [2024-08-11 12:49:21.855929] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:30.442 [2024-08-11 12:49:21.856155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73210 ] 00:07:30.442 [2024-08-11 12:49:22.001395] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.442 [2024-08-11 12:49:22.036184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.442 [2024-08-11 12:49:22.036249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.010 Running I/O for 5 seconds... 
00:07:36.280 00:07:36.280 Latency(us) 00:07:36.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.280 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x0 length 0xbd0bd 00:07:36.280 Nvme0n1 : 5.05 1342.56 5.24 0.00 0.00 95021.80 20733.21 91988.71 00:07:36.280 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:36.280 Nvme0n1 : 5.04 1218.06 4.76 0.00 0.00 104619.60 24784.52 91035.46 00:07:36.280 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x0 length 0x4ff80 00:07:36.280 Nvme1n1p1 : 5.05 1342.11 5.24 0.00 0.00 94868.68 19541.64 85315.96 00:07:36.280 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:36.280 Nvme1n1p1 : 5.09 1219.75 4.76 0.00 0.00 104132.53 11319.85 86745.83 00:07:36.280 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x0 length 0x4ff7f 00:07:36.280 Nvme1n1p2 : 5.06 1341.75 5.24 0.00 0.00 94662.33 18588.39 76260.07 00:07:36.280 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:36.280 Nvme1n1p2 : 5.09 1219.23 4.76 0.00 0.00 103916.39 10902.81 82456.20 00:07:36.280 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x0 length 0x80000 00:07:36.280 Nvme2n1 : 5.08 1347.46 5.26 0.00 0.00 94018.49 7506.85 72447.07 00:07:36.280 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x80000 length 0x80000 00:07:36.280 Nvme2n1 : 5.10 1228.84 4.80 0.00 0.00 103320.04 8877.15 80549.70 00:07:36.280 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x0 length 0x80000 00:07:36.280 Nvme2n2 : 5.09 1357.16 5.30 0.00 0.00 93380.43 7626.01 75306.82 00:07:36.280 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x80000 length 0x80000 00:07:36.280 Nvme2n2 : 5.11 1228.41 4.80 0.00 0.00 103106.94 8936.73 83409.45 00:07:36.280 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x0 length 0x80000 00:07:36.280 Nvme2n3 : 5.09 1356.78 5.30 0.00 0.00 93208.59 6940.86 78166.57 00:07:36.280 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x80000 length 0x80000 00:07:36.280 Nvme2n3 : 5.11 1227.99 4.80 0.00 0.00 102923.58 9234.62 86269.21 00:07:36.280 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x0 length 0x20000 00:07:36.280 Nvme3n1 : 5.10 1356.33 5.30 0.00 0.00 93036.41 6821.70 81026.33 00:07:36.280 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.280 Verification LBA range: start 0x20000 length 0x20000 00:07:36.280 Nvme3n1 : 5.11 1227.57 4.80 0.00 0.00 102737.12 9413.35 90082.21 00:07:36.280 =================================================================================================================== 00:07:36.280 Total : 18013.99 70.37 0.00 0.00 98554.12 6821.70 91988.71 
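The verify numbers above come from a single bdevperf run whose command line is captured in the trace. Spelled out with flag glosses (the glosses follow common bdevperf usage rather than anything printed in this log; -C is left unexplained and simply passed through as in the run):

# --json : bdev configuration file to load
# -q     : queue depth per job (128)
# -o     : I/O size in bytes (4096 here; the big-I/O pass below uses 65536)
# -w     : workload pattern ("verify" re-reads and checks written data)
# -t     : run time in seconds
# -m     : reactor core mask (0x3 = two cores, hence the 0x1/0x2 rows per bdev)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3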
00:07:36.539 00:07:36.539 real 0m6.241s 00:07:36.539 user 0m11.670s 00:07:36.539 sys 0m0.217s 00:07:36.539 12:49:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.539 12:49:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:36.539 ************************************ 00:07:36.539 END TEST bdev_verify 00:07:36.539 ************************************ 00:07:36.539 12:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:36.539 12:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:07:36.539 12:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.539 12:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.539 ************************************ 00:07:36.539 START TEST bdev_verify_big_io 00:07:36.539 ************************************ 00:07:36.539 12:49:28 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:36.798 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:36.798 [2024-08-11 12:49:28.156764] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:36.798 [2024-08-11 12:49:28.156969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73297 ] 00:07:36.798 [2024-08-11 12:49:28.303160] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.798 [2024-08-11 12:49:28.337475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.798 [2024-08-11 12:49:28.337549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.365 Running I/O for 5 seconds... 
00:07:43.926 00:07:43.926 Latency(us) 00:07:43.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x0 length 0xbd0b 00:07:43.927 Nvme0n1 : 5.99 90.87 5.68 0.00 0.00 1336046.75 20614.05 1250665.19 00:07:43.927 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:43.927 Nvme0n1 : 5.84 90.45 5.65 0.00 0.00 1352180.66 27167.65 1517575.45 00:07:43.927 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x0 length 0x4ff8 00:07:43.927 Nvme1n1p1 : 5.89 91.70 5.73 0.00 0.00 1300193.20 112006.98 1082893.03 00:07:43.927 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:43.927 Nvme1n1p1 : 5.99 88.97 5.56 0.00 0.00 1349251.10 65774.31 1967509.88 00:07:43.927 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x0 length 0x4ff7 00:07:43.927 Nvme1n1p2 : 6.10 92.13 5.76 0.00 0.00 1252114.15 96754.97 1494697.43 00:07:43.927 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:43.927 Nvme1n1p2 : 5.91 97.30 6.08 0.00 0.00 1208996.94 66250.94 1098145.05 00:07:43.927 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x0 length 0x8000 00:07:43.927 Nvme2n1 : 5.99 87.61 5.48 0.00 0.00 1290682.21 97231.59 2287802.18 00:07:43.927 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x8000 length 0x8000 00:07:43.927 Nvme2n1 : 5.91 97.43 6.09 0.00 0.00 1167855.24 66250.94 1136275.08 00:07:43.927 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x0 length 0x8000 00:07:43.927 Nvme2n2 : 6.10 91.44 5.71 0.00 0.00 1194580.56 98661.47 2318306.21 00:07:43.927 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x8000 length 0x8000 00:07:43.927 Nvme2n2 : 6.05 101.27 6.33 0.00 0.00 1080655.69 77689.95 1174405.12 00:07:43.927 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x0 length 0x8000 00:07:43.927 Nvme2n3 : 6.13 101.57 6.35 0.00 0.00 1051375.12 5362.04 2379314.27 00:07:43.927 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x8000 length 0x8000 00:07:43.927 Nvme2n3 : 6.09 109.11 6.82 0.00 0.00 979576.94 35985.22 1204909.15 00:07:43.927 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x0 length 0x2000 00:07:43.927 Nvme3n1 : 6.14 105.85 6.62 0.00 0.00 976004.79 2561.86 2089525.99 00:07:43.927 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:43.927 Verification LBA range: start 0x2000 length 0x2000 00:07:43.927 Nvme3n1 : 6.11 120.69 7.54 0.00 0.00 862249.22 3991.74 1258291.20 00:07:43.927 =================================================================================================================== 00:07:43.927 Total : 1366.39 85.40 0.00 0.00 1156458.16 
2561.86 2379314.27 00:07:43.927 00:07:43.927 real 0m7.419s 00:07:43.927 user 0m13.975s 00:07:43.927 sys 0m0.243s 00:07:43.927 12:49:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.927 12:49:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:43.927 ************************************ 00:07:43.927 END TEST bdev_verify_big_io 00:07:43.927 ************************************ 00:07:44.186 12:49:35 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.186 12:49:35 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:44.186 12:49:35 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.186 12:49:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.186 ************************************ 00:07:44.186 START TEST bdev_write_zeroes 00:07:44.186 ************************************ 00:07:44.186 12:49:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.186 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:44.186 [2024-08-11 12:49:35.627590] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:44.186 [2024-08-11 12:49:35.627754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73401 ] 00:07:44.186 [2024-08-11 12:49:35.772443] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.445 [2024-08-11 12:49:35.814145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.703 Running I/O for 1 seconds... 
00:07:46.077 00:07:46.077 Latency(us) 00:07:46.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.077 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.077 Nvme0n1 : 1.02 6900.84 26.96 0.00 0.00 18463.54 14358.34 28359.21 00:07:46.077 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.077 Nvme1n1p1 : 1.02 6889.41 26.91 0.00 0.00 18458.74 14537.08 29074.15 00:07:46.077 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.077 Nvme1n1p2 : 1.02 6878.20 26.87 0.00 0.00 18407.45 14239.19 27644.28 00:07:46.077 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.077 Nvme2n1 : 1.03 6917.28 27.02 0.00 0.00 18283.43 11319.85 24546.21 00:07:46.077 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.077 Nvme2n2 : 1.03 6906.90 26.98 0.00 0.00 18253.62 10247.45 25380.31 00:07:46.077 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.077 Nvme2n3 : 1.03 6896.63 26.94 0.00 0.00 18234.70 10009.13 25737.77 00:07:46.077 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.077 Nvme3n1 : 1.03 6886.46 26.90 0.00 0.00 18219.81 9234.62 26095.24 00:07:46.077 =================================================================================================================== 00:07:46.077 Total : 48275.72 188.58 0.00 0.00 18331.18 9234.62 29074.15 00:07:46.077 00:07:46.077 real 0m1.972s 00:07:46.077 user 0m1.655s 00:07:46.077 sys 0m0.199s 00:07:46.077 12:49:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.077 ************************************ 00:07:46.077 END TEST bdev_write_zeroes 00:07:46.077 ************************************ 00:07:46.077 12:49:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:46.077 12:49:37 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.077 12:49:37 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:46.077 12:49:37 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.077 12:49:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.077 ************************************ 00:07:46.077 START TEST bdev_json_nonenclosed 00:07:46.077 ************************************ 00:07:46.077 12:49:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.077 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:46.077 [2024-08-11 12:49:37.663382] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:46.077 [2024-08-11 12:49:37.663589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73443 ] 00:07:46.336 [2024-08-11 12:49:37.814422] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.336 [2024-08-11 12:49:37.861220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.336 [2024-08-11 12:49:37.861353] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:46.336 [2024-08-11 12:49:37.861412] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.336 [2024-08-11 12:49:37.861432] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.595 00:07:46.595 real 0m0.416s 00:07:46.595 user 0m0.196s 00:07:46.595 sys 0m0.115s 00:07:46.595 12:49:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.595 12:49:37 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:46.595 ************************************ 00:07:46.595 END TEST bdev_json_nonenclosed 00:07:46.595 ************************************ 00:07:46.595 12:49:38 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.595 12:49:38 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:46.595 12:49:38 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.595 12:49:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.595 ************************************ 00:07:46.595 START TEST bdev_json_nonarray 00:07:46.595 ************************************ 00:07:46.595 12:49:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.595 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:46.595 [2024-08-11 12:49:38.136360] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:46.595 [2024-08-11 12:49:38.136557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73468 ] 00:07:46.854 [2024-08-11 12:49:38.288026] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.854 [2024-08-11 12:49:38.336394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.854 [2024-08-11 12:49:38.336534] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:46.854 [2024-08-11 12:49:38.336580] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.854 [2024-08-11 12:49:38.336599] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.112 00:07:47.112 real 0m0.420s 00:07:47.112 user 0m0.190s 00:07:47.112 sys 0m0.125s 00:07:47.112 ************************************ 00:07:47.112 END TEST bdev_json_nonarray 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:47.112 ************************************ 00:07:47.112 12:49:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:47.112 12:49:38 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:47.112 12:49:38 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:47.112 12:49:38 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.112 12:49:38 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.112 12:49:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.112 ************************************ 00:07:47.112 START TEST bdev_gpt_uuid 00:07:47.112 ************************************ 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73494 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73494 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 73494 ']' 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.112 12:49:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.112 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:47.112 [2024-08-11 12:49:38.637839] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
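The two json_config failures above are intentional: nonenclosed.json and nonarray.json violate the shape that json_config.c enforces, namely a top-level JSON object whose "subsystems" member is an array. A minimal well-formed config of the kind bdev.json supplies might look like this (subsystem layout per the standard SPDK JSON config format; the malloc bdev parameters are illustrative, not taken from this run):

cat > minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF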
00:07:47.112 [2024-08-11 12:49:38.638072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73494 ] 00:07:47.370 [2024-08-11 12:49:38.792793] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.370 [2024-08-11 12:49:38.841268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.629 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.629 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:07:47.629 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.629 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:47.629 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.888 Some configs were skipped because the RPC state that can call them passed over. 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:47.888 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:47.888 { 00:07:47.888 "name": "Nvme1n1p1", 00:07:47.888 "aliases": [ 00:07:47.888 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:47.888 ], 00:07:47.888 "product_name": "GPT Disk", 00:07:47.888 "block_size": 4096, 00:07:47.888 "num_blocks": 655104, 00:07:47.888 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:47.888 "assigned_rate_limits": { 00:07:47.889 "rw_ios_per_sec": 0, 00:07:47.889 "rw_mbytes_per_sec": 0, 00:07:47.889 "r_mbytes_per_sec": 0, 00:07:47.889 "w_mbytes_per_sec": 0 00:07:47.889 }, 00:07:47.889 "claimed": false, 00:07:47.889 "zoned": false, 00:07:47.889 "supported_io_types": { 00:07:47.889 "read": true, 00:07:47.889 "write": true, 00:07:47.889 "unmap": true, 00:07:47.889 "flush": true, 00:07:47.889 "reset": true, 00:07:47.889 "nvme_admin": false, 00:07:47.889 "nvme_io": false, 00:07:47.889 "nvme_io_md": false, 00:07:47.889 "write_zeroes": true, 00:07:47.889 "zcopy": false, 00:07:47.889 "get_zone_info": false, 00:07:47.889 "zone_management": false, 00:07:47.889 "zone_append": false, 00:07:47.889 "compare": true, 00:07:47.889 "compare_and_write": false, 00:07:47.889 "abort": true, 00:07:47.889 "seek_hole": false, 00:07:47.889 "seek_data": false, 00:07:47.889 "copy": true, 00:07:47.889 "nvme_iov_md": false 00:07:47.889 }, 00:07:47.889 "driver_specific": { 
00:07:47.889 "gpt": { 00:07:47.889 "base_bdev": "Nvme1n1", 00:07:47.889 "offset_blocks": 256, 00:07:47.889 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:47.889 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:47.889 "partition_name": "SPDK_TEST_first" 00:07:47.889 } 00:07:47.889 } 00:07:47.889 } 00:07:47.889 ]' 00:07:47.889 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:47.889 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:47.889 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:48.147 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:48.147 { 00:07:48.147 "name": "Nvme1n1p2", 00:07:48.147 "aliases": [ 00:07:48.147 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:48.147 ], 00:07:48.147 "product_name": "GPT Disk", 00:07:48.147 "block_size": 4096, 00:07:48.147 "num_blocks": 655103, 00:07:48.147 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.147 "assigned_rate_limits": { 00:07:48.147 "rw_ios_per_sec": 0, 00:07:48.147 "rw_mbytes_per_sec": 0, 00:07:48.147 "r_mbytes_per_sec": 0, 00:07:48.147 "w_mbytes_per_sec": 0 00:07:48.147 }, 00:07:48.147 "claimed": false, 00:07:48.147 "zoned": false, 00:07:48.147 "supported_io_types": { 00:07:48.147 "read": true, 00:07:48.147 "write": true, 00:07:48.147 "unmap": true, 00:07:48.147 "flush": true, 00:07:48.147 "reset": true, 00:07:48.147 "nvme_admin": false, 00:07:48.147 "nvme_io": false, 00:07:48.147 "nvme_io_md": false, 00:07:48.147 "write_zeroes": true, 00:07:48.147 "zcopy": false, 00:07:48.147 "get_zone_info": false, 00:07:48.148 "zone_management": false, 00:07:48.148 "zone_append": false, 00:07:48.148 "compare": true, 00:07:48.148 "compare_and_write": false, 00:07:48.148 "abort": true, 00:07:48.148 "seek_hole": false, 00:07:48.148 "seek_data": false, 00:07:48.148 "copy": true, 00:07:48.148 "nvme_iov_md": false 00:07:48.148 }, 00:07:48.148 "driver_specific": { 00:07:48.148 "gpt": { 00:07:48.148 "base_bdev": "Nvme1n1", 00:07:48.148 "offset_blocks": 655360, 00:07:48.148 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:48.148 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.148 "partition_name": "SPDK_TEST_second" 00:07:48.148 } 00:07:48.148 } 00:07:48.148 } 00:07:48.148 ]' 00:07:48.148 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:48.148 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:48.148 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:48.148 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.148 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 73494 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 73494 ']' 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 73494 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73494 00:07:48.407 killing process with pid 73494 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73494' 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 73494 00:07:48.407 12:49:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 73494 00:07:48.665 ************************************ 00:07:48.665 END TEST bdev_gpt_uuid 00:07:48.665 ************************************ 00:07:48.665 00:07:48.665 real 0m1.651s 00:07:48.665 user 0m1.965s 00:07:48.665 sys 0m0.393s 00:07:48.665 12:49:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.665 12:49:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:48.665 12:49:40 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:49.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.232 Waiting for block devices as requested 00:07:49.232 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.491 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:49.491 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.491 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.759 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:54.759 12:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:54.759 12:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:55.018 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:55.018 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:55.018 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:55.018 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:55.018 12:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:55.018 00:07:55.018 real 0m52.117s 00:07:55.018 user 1m7.039s 00:07:55.018 sys 0m9.263s 00:07:55.018 12:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.018 ************************************ 00:07:55.018 END TEST blockdev_nvme_gpt 00:07:55.018 ************************************ 00:07:55.018 12:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.018 12:49:46 -- spdk/autotest.sh@225 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:55.018 12:49:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:55.018 12:49:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.018 12:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:55.018 ************************************ 00:07:55.018 START TEST nvme 00:07:55.018 ************************************ 00:07:55.018 12:49:46 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:55.018 * Looking for test storage... 00:07:55.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:55.018 12:49:46 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:55.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.153 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.153 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.153 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.153 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.412 12:49:47 nvme -- nvme/nvme.sh@79 -- # uname 00:07:56.412 12:49:47 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:56.412 12:49:47 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:56.412 12:49:47 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:07:56.412 Waiting for stub to ready for secondary processes... 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1067 -- # stubpid=74108 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 
00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/74108 ]] 00:07:56.412 12:49:47 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:07:56.412 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:56.412 [2024-08-11 12:49:47.854748] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:56.412 [2024-08-11 12:49:47.855080] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:57.349 [2024-08-11 12:49:48.601103] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.349 [2024-08-11 12:49:48.630979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.349 [2024-08-11 12:49:48.631062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.349 [2024-08-11 12:49:48.631124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.349 [2024-08-11 12:49:48.647081] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:57.349 [2024-08-11 12:49:48.647155] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.349 [2024-08-11 12:49:48.662206] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:57.349 [2024-08-11 12:49:48.662465] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:57.349 [2024-08-11 12:49:48.663479] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.349 [2024-08-11 12:49:48.663786] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:57.349 [2024-08-11 12:49:48.663960] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:57.349 [2024-08-11 12:49:48.665509] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.349 [2024-08-11 12:49:48.665950] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:57.349 [2024-08-11 12:49:48.666134] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:57.349 [2024-08-11 12:49:48.667626] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.349 [2024-08-11 12:49:48.667994] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:57.349 [2024-08-11 12:49:48.668158] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:57.349 [2024-08-11 12:49:48.668262] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:57.349 [2024-08-11 12:49:48.668374] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:57.349 done. 00:07:57.349 12:49:48 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:57.349 12:49:48 nvme -- common/autotest_common.sh@1074 -- # echo done. 
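The trace above shows the nvme.sh harness launching the SPDK `stub` application as the primary process and then polling until the stub signals readiness (by creating `/var/run/spdk_stub0`) before the secondary-process tests are allowed to run. Below is a minimal bash sketch of that readiness loop, reconstructed only from the paths, flags, and messages visible in this log; it is an illustration of the pattern, not the actual `_start_stub` implementation in autotest_common.sh.

```bash
#!/usr/bin/env bash
# Sketch of the stub readiness wait seen in the trace above.
# Paths and flags are taken from the log output; the loop itself is a
# simplified reconstruction, not the real autotest_common.sh code.

rootdir=/home/vagrant/spdk_repo/spdk

# Launch the primary-process stub: 4096 MiB of memory, shm id 0, cores 1-3 (0xE).
"$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
stubpid=$!

echo "Waiting for stub to ready for secondary processes..."

# The stub creates /var/run/spdk_stub0 once its EAL/NVMe initialization is done.
while [ ! -e /var/run/spdk_stub0 ]; do
    # Bail out if the stub process died before becoming ready.
    if [ ! -e "/proc/$stubpid" ]; then
        echo "stub exited before becoming ready" >&2
        exit 1
    fi
    sleep 1s
done

echo done.
```

Once the marker file exists, each subsequent test (nvme_reset, nvme_identify, and so on) attaches to the already-initialized controllers as a secondary process instead of re-running PCI enumeration, which is why the log shows the individual tools starting almost immediately after "done." is printed.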
00:07:57.349 12:49:48 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:57.349 12:49:48 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:57.349 12:49:48 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.349 12:49:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.349 ************************************ 00:07:57.349 START TEST nvme_reset 00:07:57.349 ************************************ 00:07:57.349 12:49:48 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:57.349 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:57.607 Initializing NVMe Controllers 00:07:57.607 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:57.607 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:57.608 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:57.608 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:57.608 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:57.608 ************************************ 00:07:57.608 END TEST nvme_reset 00:07:57.608 ************************************ 00:07:57.608 00:07:57.608 real 0m0.256s 00:07:57.608 user 0m0.083s 00:07:57.608 sys 0m0.127s 00:07:57.608 12:49:49 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.608 12:49:49 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:57.608 12:49:49 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:57.608 12:49:49 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:57.608 12:49:49 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.608 12:49:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.608 ************************************ 00:07:57.608 START TEST nvme_identify 00:07:57.608 ************************************ 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:07:57.608 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:57.608 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:57.608 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:57.608 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:07:57.608 12:49:49 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:57.608 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:57.869 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:57.869 ===================================================== 00:07:57.869 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:57.869 
===================================================== 00:07:57.869 Controller Capabilities/Features 00:07:57.869 ================================ 00:07:57.869 Vendor ID: 1b36 00:07:57.869 Subsystem Vendor ID: 1af4 00:07:57.869 Serial Number: 12340 00:07:57.869 Model Number: QEMU NVMe Ctrl 00:07:57.869 Firmware Version: 8.0.0 00:07:57.869 Recommended Arb Burst: 6 00:07:57.869 IEEE OUI Identifier: 00 54 52 00:07:57.869 Multi-path I/O 00:07:57.869 May have multiple subsystem ports: No 00:07:57.869 May have multiple controllers: No 00:07:57.869 Associated with SR-IOV VF: No 00:07:57.869 Max Data Transfer Size: 524288 00:07:57.869 Max Number of Namespaces: 256 00:07:57.869 Max Number of I/O Queues: 64 00:07:57.869 NVMe Specification Version (VS): 1.4 00:07:57.869 NVMe Specification Version (Identify): 1.4 00:07:57.869 Maximum Queue Entries: 2048 00:07:57.869 Contiguous Queues Required: Yes 00:07:57.869 Arbitration Mechanisms Supported 00:07:57.869 Weighted Round Robin: Not Supported 00:07:57.869 Vendor Specific: Not Supported 00:07:57.869 Reset Timeout: 7500 ms 00:07:57.869 Doorbell Stride: 4 bytes 00:07:57.869 NVM Subsystem Reset: Not Supported 00:07:57.869 Command Sets Supported 00:07:57.869 NVM Command Set: Supported 00:07:57.869 Boot Partition: Not Supported 00:07:57.869 Memory Page Size Minimum: 4096 bytes 00:07:57.869 Memory Page Size Maximum: 65536 bytes 00:07:57.869 Persistent Memory Region: Not Supported 00:07:57.869 Optional Asynchronous Events Supported 00:07:57.869 Namespace Attribute Notices: Supported 00:07:57.869 Firmware Activation Notices: Not Supported 00:07:57.869 ANA Change Notices: Not Supported 00:07:57.869 PLE Aggregate Log Change Notices: Not Supported 00:07:57.869 LBA Status Info Alert Notices: Not Supported 00:07:57.869 EGE Aggregate Log Change Notices: Not Supported 00:07:57.869 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.869 Zone Descriptor Change Notices: Not Supported 00:07:57.869 Discovery Log Change Notices: Not Supported 00:07:57.869 Controller Attributes 00:07:57.869 128-bit Host Identifier: Not Supported 00:07:57.869 Non-Operational Permissive Mode: Not Supported 00:07:57.869 NVM Sets: Not Supported 00:07:57.869 Read Recovery Levels: Not Supported 00:07:57.869 Endurance Groups: Not Supported 00:07:57.869 Predictable Latency Mode: Not Supported 00:07:57.869 Traffic Based Keep ALive: Not Supported 00:07:57.869 Namespace Granularity: Not Supported 00:07:57.869 SQ Associations: Not Supported 00:07:57.869 UUID List: Not Supported 00:07:57.869 Multi-Domain Subsystem: Not Supported 00:07:57.869 Fixed Capacity Management: Not Supported 00:07:57.869 Variable Capacity Management: Not Supported 00:07:57.869 Delete Endurance Group: Not Supported 00:07:57.869 Delete NVM Set: Not Supported 00:07:57.869 Extended LBA Formats Supported: Supported 00:07:57.869 Flexible Data Placement Supported: Not Supported 00:07:57.869 00:07:57.869 Controller Memory Buffer Support 00:07:57.869 ================================ 00:07:57.870 Supported: No 00:07:57.870 00:07:57.870 Persistent Memory Region Support 00:07:57.870 ================================ 00:07:57.870 Supported: No 00:07:57.870 00:07:57.870 Admin Command Set Attributes 00:07:57.870 ============================ 00:07:57.870 Security Send/Receive: Not Supported 00:07:57.870 Format NVM: Supported 00:07:57.870 Firmware Activate/Download: Not Supported 00:07:57.870 Namespace Management: Supported 00:07:57.870 Device Self-Test: Not Supported 00:07:57.870 Directives: Supported 00:07:57.870 NVMe-MI: Not Supported 
00:07:57.870 Virtualization Management: Not Supported 00:07:57.870 Doorbell Buffer Config: Supported 00:07:57.870 Get LBA Status Capability: Not Supported 00:07:57.870 Command & Feature Lockdown Capability: Not Supported 00:07:57.870 Abort Command Limit: 4 00:07:57.870 Async Event Request Limit: 4 00:07:57.870 Number of Firmware Slots: N/A 00:07:57.870 Firmware Slot 1 Read-Only: N/A 00:07:57.870 Firmware Activation Without Reset: N/A 00:07:57.870 Multiple Update Detection Support: N/A 00:07:57.870 Firmware Update Granularity: No Information Provided 00:07:57.870 Per-Namespace SMART Log: Yes 00:07:57.870 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.870 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:57.870 Command Effects Log Page: Supported 00:07:57.870 Get Log Page Extended Data: Supported 00:07:57.870 Telemetry Log Pages: Not Supported 00:07:57.870 Persistent Event Log Pages: Not Supported 00:07:57.870 Supported Log Pages Log Page: May Support 00:07:57.870 Commands Supported & Effects Log Page: Not Supported 00:07:57.870 Feature Identifiers & Effects Log Page:May Support 00:07:57.870 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.870 Data Area 4 for Telemetry Log: Not Supported 00:07:57.870 Error Log Page Entries Supported: 1 00:07:57.870 Keep Alive: Not Supported 00:07:57.870 00:07:57.870 NVM Command Set Attributes 00:07:57.870 ========================== 00:07:57.870 Submission Queue Entry Size 00:07:57.870 Max: 64 00:07:57.870 Min: 64 00:07:57.870 Completion Queue Entry Size 00:07:57.870 Max: 16 00:07:57.870 Min: 16 00:07:57.870 Number of Namespaces: 256 00:07:57.870 Compare Command: Supported 00:07:57.870 Write Uncorrectable Command: Not Supported 00:07:57.870 Dataset Management Command: Supported 00:07:57.870 Write Zeroes Command: Supported 00:07:57.870 Set Features Save Field: Supported 00:07:57.870 Reservations: Not Supported 00:07:57.870 Timestamp: Supported 00:07:57.870 Copy: Supported 00:07:57.870 Volatile Write Cache: Present 00:07:57.870 Atomic Write Unit (Normal): 1 00:07:57.870 Atomic Write Unit (PFail): 1 00:07:57.870 Atomic Compare & Write Unit: 1 00:07:57.870 Fused Compare & Write: Not Supported 00:07:57.870 Scatter-Gather List 00:07:57.870 SGL Command Set: Supported 00:07:57.870 SGL Keyed: Not Supported 00:07:57.870 SGL Bit Bucket Descriptor: Not Supported 00:07:57.870 SGL Metadata Pointer: Not Supported 00:07:57.870 Oversized SGL: Not Supported 00:07:57.870 SGL Metadata Address: Not Supported 00:07:57.870 SGL Offset: Not Supported 00:07:57.870 Transport SGL Data Block: Not Supported 00:07:57.870 Replay Protected Memory Block: Not Supported 00:07:57.870 00:07:57.870 Firmware Slot Information 00:07:57.870 ========================= 00:07:57.870 Active slot: 1 00:07:57.870 Slot 1 Firmware Revision: 1.0 00:07:57.870 00:07:57.870 00:07:57.870 Commands Supported and Effects 00:07:57.870 ============================== 00:07:57.870 Admin Commands 00:07:57.870 -------------- 00:07:57.870 Delete I/O Submission Queue (00h): Supported 00:07:57.870 Create I/O Submission Queue (01h): Supported 00:07:57.870 Get Log Page (02h): Supported 00:07:57.870 Delete I/O Completion Queue (04h): Supported 00:07:57.870 Create I/O Completion Queue (05h): Supported 00:07:57.870 Identify (06h): Supported 00:07:57.870 Abort (08h): Supported 00:07:57.870 Set Features (09h): Supported 00:07:57.870 Get Features (0Ah): Supported 00:07:57.870 Asynchronous Event Request (0Ch): Supported 00:07:57.870 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.870 Directive 
Send (19h): Supported 00:07:57.870 Directive Receive (1Ah): Supported 00:07:57.870 Virtualization Management (1Ch): Supported 00:07:57.870 Doorbell Buffer Config (7Ch): Supported 00:07:57.870 Format NVM (80h): Supported LBA-Change 00:07:57.870 I/O Commands 00:07:57.870 ------------ 00:07:57.870 Flush (00h): Supported LBA-Change 00:07:57.870 Write (01h): Supported LBA-Change 00:07:57.870 Read (02h): Supported 00:07:57.870 Compare (05h): Supported 00:07:57.870 Write Zeroes (08h): Supported LBA-Change 00:07:57.870 Dataset Management (09h): Supported LBA-Change 00:07:57.870 Unknown (0Ch): Supported 00:07:57.870 Unknown (12h): Supported 00:07:57.870 Copy (19h): Supported LBA-Change 00:07:57.870 Unknown (1Dh): Supported LBA-Change 00:07:57.870 00:07:57.870 Error Log 00:07:57.870 ========= 00:07:57.870 00:07:57.870 Arbitration 00:07:57.870 =========== 00:07:57.870 Arbitration Burst: no limit 00:07:57.870 00:07:57.870 Power Management 00:07:57.870 ================ 00:07:57.870 Number of Power States: 1 00:07:57.870 Current Power State: Power State #0 00:07:57.870 Power State #0: 00:07:57.870 Max Power: 25.00 W 00:07:57.870 Non-Operational State: Operational 00:07:57.870 Entry Latency: 16 microseconds 00:07:57.870 Exit Latency: 4 microseconds 00:07:57.870 Relative Read Throughput: 0 00:07:57.870 Relative Read Latency: 0 00:07:57.870 Relative Write Throughput: 0 00:07:57.870 Relative Write Latency: 0 00:07:57.870 Idle Power: Not Reported 00:07:57.870 Active Power: Not Reported 00:07:57.870 Non-Operational Permissive Mode: Not Supported 00:07:57.870 00:07:57.870 Health Information 00:07:57.870 ================== 00:07:57.870 Critical Warnings: 00:07:57.870 Available Spare Space: OK 00:07:57.870 Temperature: OK 00:07:57.870 Device Reliability: OK 00:07:57.870 Read Only: No 00:07:57.870 Volatile Memory Backup: OK 00:07:57.870 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.870 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.870 Available Spare: 0% 00:07:57.870 Available Spare Threshold: 0% 00:07:57.870 Life Percentage Used: 0% 00:07:57.870 Data Units Read: 634 00:07:57.870 Data Units Written: 562 00:07:57.870 Host Read Commands: 31974 00:07:57.870 Host Write Commands: 31760 00:07:57.870 Controller Busy Time: 0 minutes 00:07:57.870 Power Cycles: 0 00:07:57.870 Power On Hours: 0 hours 00:07:57.870 Unsafe Shutdowns: 0 00:07:57.870 Unrecoverable Media Errors: 0 00:07:57.870 Lifetime Error Log Entries: 0 00:07:57.870 Warning Temperature Time: 0 minutes 00:07:57.870 Critical Temperature Time: 0 minutes 00:07:57.870 00:07:57.870 Number of Queues 00:07:57.870 ================ 00:07:57.870 Number of I/O Submission Queues: 64 00:07:57.870 Number of I/O Completion Queues: 64 00:07:57.870 00:07:57.870 ZNS Specific Controller Data 00:07:57.870 ============================ 00:07:57.870 Zone Append Size Limit: 0 00:07:57.870 00:07:57.870 00:07:57.870 Active Namespaces 00:07:57.870 ================= 00:07:57.870 Namespace ID:1 00:07:57.870 Error Recovery Timeout: Unlimited 00:07:57.870 Command Set Identifier: NVM (00h) 00:07:57.870 Deallocate: Supported 00:07:57.870 Deallocated/Unwritten Error: Supported 00:07:57.870 Deallocated Read Value: All 0x00 00:07:57.870 Deallocate in Write Zeroes: Not Supported 00:07:57.870 Deallocated Guard Field: 0xFFFF 00:07:57.870 Flush: Supported 00:07:57.870 Reservation: Not Supported 00:07:57.870 Metadata Transferred as: Separate Metadata Buffer 00:07:57.870 Namespace Sharing Capabilities: Private 00:07:57.870 Size (in LBAs): 1548666 (5GiB) 00:07:57.870 Capacity (in 
LBAs): 1548666 (5GiB) 00:07:57.870 Utilization (in LBAs): 1548666 (5GiB) 00:07:57.870 Thin Provisioning: Not Supported 00:07:57.870 Per-NS Atomic Units: No 00:07:57.870 Maximum Single Source Range Length: 128 00:07:57.870 Maximum Copy Length: 128 00:07:57.870 Maximum Source Range Count: 128 00:07:57.870 NGUID/EUI64 Never Reused: No 00:07:57.870 Namespace Write Protected: No 00:07:57.870 Number of LBA Formats: 8 00:07:57.870 Current LBA Format: LBA Format #07 00:07:57.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.870 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.870 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.870 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.870 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.870 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.870 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.871 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.871 00:07:57.871 NVM Specific Namespace Data 00:07:57.871 =========================== 00:07:57.871 Logical Block Storage Tag Mask: 0 00:07:57.871 Protection Information Capabilities: 00:07:57.871 16b Guard Protection Information Storage Tag Support: No 00:07:57.871 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.871 Storage Tag Check Read Support: No 00:07:57.871 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.871 ===================================================== 00:07:57.871 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:57.871 ===================================================== 00:07:57.871 Controller Capabilities/Features 00:07:57.871 ================================ 00:07:57.871 Vendor ID: 1b36 00:07:57.871 Subsystem Vendor ID: 1af4 00:07:57.871 Serial Number: 12341 00:07:57.871 Model Number: QEMU NVMe Ctrl 00:07:57.871 Firmware Version: 8.0.0 00:07:57.871 Recommended Arb Burst: 6 00:07:57.871 IEEE OUI Identifier: 00 54 52 00:07:57.871 Multi-path I/O 00:07:57.871 May have multiple subsystem ports: No 00:07:57.871 May have multiple controllers: No 00:07:57.871 Associated with SR-IOV VF: No 00:07:57.871 Max Data Transfer Size: 524288 00:07:57.871 Max Number of Namespaces: 256 00:07:57.871 Max Number of I/O Queues: 64 00:07:57.871 NVMe Specification Version (VS): 1.4 00:07:57.871 NVMe Specification Version (Identify): 1.4 00:07:57.871 Maximum Queue Entries: 2048 00:07:57.871 Contiguous Queues Required: Yes 00:07:57.871 Arbitration Mechanisms Supported 00:07:57.871 Weighted Round Robin: Not Supported 00:07:57.871 Vendor Specific: Not Supported 00:07:57.871 Reset Timeout: 7500 ms 00:07:57.871 Doorbell Stride: 4 bytes 00:07:57.871 NVM Subsystem Reset: Not Supported 00:07:57.871 Command Sets Supported 00:07:57.871 NVM Command Set: 
Supported 00:07:57.871 Boot Partition: Not Supported 00:07:57.871 Memory Page Size Minimum: 4096 bytes 00:07:57.871 Memory Page Size Maximum: 65536 bytes 00:07:57.871 Persistent Memory Region: Not Supported 00:07:57.871 Optional Asynchronous Events Supported 00:07:57.871 Namespace Attribute Notices: Supported 00:07:57.871 Firmware Activation Notices: Not Supported 00:07:57.871 ANA Change Notices: Not Supported 00:07:57.871 PLE Aggregate Log Change Notices: Not Supported 00:07:57.871 LBA Status Info Alert Notices: Not Supported 00:07:57.871 EGE Aggregate Log Change Notices: Not Supported 00:07:57.871 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.871 Zone Descriptor Change Notices: Not Supported 00:07:57.871 Discovery Log Change Notices: Not Supported 00:07:57.871 Controller Attributes 00:07:57.871 128-bit Host Identifier: Not Supported 00:07:57.871 Non-Operational Permissive Mode: Not Supported 00:07:57.871 NVM Sets: Not Supported 00:07:57.871 Read Recovery Levels: Not Supported 00:07:57.871 Endurance Groups: Not Supported 00:07:57.871 Predictable Latency Mode: Not Supported 00:07:57.871 Traffic Based Keep ALive: Not Supported 00:07:57.871 Namespace Granularity: Not Supported 00:07:57.871 SQ Associations: Not Supported 00:07:57.871 UUID List: Not Supported 00:07:57.871 Multi-Domain Subsystem: Not Supported 00:07:57.871 Fixed Capacity Management: Not Supported 00:07:57.871 Variable Capacity Management: Not Supported 00:07:57.871 Delete Endurance Group: Not Supported 00:07:57.871 Delete NVM Set: Not Supported 00:07:57.871 Extended LBA Formats Supported: Supported 00:07:57.871 Flexible Data Placement Supported: Not Supported 00:07:57.871 00:07:57.871 Controller Memory Buffer Support 00:07:57.871 ================================ 00:07:57.871 Supported: No 00:07:57.871 00:07:57.871 Persistent Memory Region Support 00:07:57.871 ================================ 00:07:57.871 Supported: No 00:07:57.871 00:07:57.871 Admin Command Set Attributes 00:07:57.871 ============================ 00:07:57.871 Security Send/Receive: Not Supported 00:07:57.871 Format NVM: Supported 00:07:57.871 Firmware Activate/Download: Not Supported 00:07:57.871 Namespace Management: Supported 00:07:57.871 Device Self-Test: Not Supported 00:07:57.871 Directives: Supported 00:07:57.871 NVMe-MI: Not Supported 00:07:57.871 Virtualization Management: Not Supported 00:07:57.871 Doorbell Buffer Config: Supported 00:07:57.871 Get LBA Status Capability: Not Supported 00:07:57.871 Command & Feature Lockdown Capability: Not Supported 00:07:57.871 Abort Command Limit: 4 00:07:57.871 Async Event Request Limit: 4 00:07:57.871 Number of Firmware Slots: N/A 00:07:57.871 Firmware Slot 1 Read-Only: N/A 00:07:57.871 Firmware Activation Without Reset: N/A 00:07:57.871 Multiple Update Detection Support: N/A 00:07:57.871 Firmware Update Granularity: No Information Provided 00:07:57.871 Per-Namespace SMART Log: Yes 00:07:57.871 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.871 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:57.871 Command Effects Log Page: Supported 00:07:57.871 Get Log Page Extended Data: Supported 00:07:57.871 Telemetry Log Pages: Not Supported 00:07:57.871 Persistent Event Log Pages: Not Supported 00:07:57.871 Supported Log Pages Log Page: May Support 00:07:57.871 Commands Supported & Effects Log Page: Not Supported 00:07:57.871 Feature Identifiers & Effects Log Page:May Support 00:07:57.871 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.871 Data Area 4 for Telemetry Log: Not Supported 
00:07:57.871 Error Log Page Entries Supported: 1 00:07:57.871 Keep Alive: Not Supported 00:07:57.871 00:07:57.871 NVM Command Set Attributes 00:07:57.871 ========================== 00:07:57.871 Submission Queue Entry Size 00:07:57.871 Max: 64 00:07:57.871 Min: 64 00:07:57.871 Completion Queue Entry Size 00:07:57.871 Max: 16 00:07:57.871 Min: 16 00:07:57.871 Number of Namespaces: 256 00:07:57.871 Compare Command: Supported 00:07:57.871 Write Uncorrectable Command: Not Supported 00:07:57.871 Dataset Management Command: Supported 00:07:57.871 Write Zeroes Command: Supported 00:07:57.871 Set Features Save Field: Supported 00:07:57.871 Reservations: Not Supported 00:07:57.871 Timestamp: Supported 00:07:57.871 Copy: Supported 00:07:57.871 Volatile Write Cache: Present 00:07:57.871 Atomic Write Unit (Normal): 1 00:07:57.871 Atomic Write Unit (PFail): 1 00:07:57.871 Atomic Compare & Write Unit: 1 00:07:57.871 Fused Compare & Write: Not Supported 00:07:57.871 Scatter-Gather List 00:07:57.871 SGL Command Set: Supported 00:07:57.871 SGL Keyed: Not Supported 00:07:57.871 SGL Bit Bucket Descriptor: Not Supported 00:07:57.871 SGL Metadata Pointer: Not Supported 00:07:57.871 Oversized SGL: Not Supported 00:07:57.871 SGL Metadata Address: Not Supported 00:07:57.871 SGL Offset: Not Supported 00:07:57.871 Transport SGL Data Block: Not Supported 00:07:57.871 Replay Protected Memory Block: Not Supported 00:07:57.871 00:07:57.871 Firmware Slot Information 00:07:57.871 ========================= 00:07:57.871 Active slot: 1 00:07:57.871 Slot 1 Firmware Revision: 1.0 00:07:57.871 00:07:57.871 00:07:57.871 Commands Supported and Effects 00:07:57.871 ============================== 00:07:57.871 Admin Commands 00:07:57.871 -------------- 00:07:57.871 Delete I/O Submission Queue (00h): Supported 00:07:57.871 Create I/O Submission Queue (01h): Supported 00:07:57.871 Get Log Page (02h): Supported 00:07:57.871 Delete I/O Completion Queue (04h): Supported 00:07:57.871 Create I/O Completion Queue (05h): Supported 00:07:57.871 Identify (06h): Supported 00:07:57.871 Abort (08h): Supported 00:07:57.871 Set Features (09h): Supported 00:07:57.871 Get Features (0Ah): Supported 00:07:57.871 Asynchronous Event Request (0Ch): Supported 00:07:57.871 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.871 Directive Send (19h): Supported 00:07:57.871 Directive Receive (1Ah): Supported 00:07:57.871 Virtualization Management (1Ch): Supported 00:07:57.871 Doorbell Buffer Config (7Ch): Supported 00:07:57.871 Format NVM (80h): Supported LBA-Change 00:07:57.871 I/O Commands 00:07:57.871 ------------ 00:07:57.871 Flush (00h): Supported LBA-Change 00:07:57.871 Write (01h): Supported LBA-Change 00:07:57.871 Read (02h): Supported 00:07:57.871 Compare (05h): Supported 00:07:57.871 Write Zeroes (08h): Supported LBA-Change 00:07:57.871 Dataset Management (09h): Supported LBA-Change 00:07:57.871 Unknown (0Ch): Supported 00:07:57.871 Unknown (12h): Supported 00:07:57.871 Copy (19h): Supported LBA-Change 00:07:57.871 Unknown (1Dh): Supported LBA-Change 00:07:57.871 00:07:57.871 Error Log 00:07:57.871 ========= 00:07:57.871 00:07:57.871 Arbitration 00:07:57.872 =========== 00:07:57.872 Arbitration Burst: no limit 00:07:57.872 00:07:57.872 Power Management 00:07:57.872 ================ 00:07:57.872 Number of Power States: 1 00:07:57.872 Current Power State: Power State #0 00:07:57.872 Power State #0: 00:07:57.872 Max Power: 25.00 W 00:07:57.872 Non-Operational State: Operational 00:07:57.872 Entry Latency: 16 microseconds 00:07:57.872 
Exit Latency: 4 microseconds 00:07:57.872 Relative Read Throughput: 0 00:07:57.872 Relative Read Latency: 0 00:07:57.872 Relative Write Throughput: 0 00:07:57.872 Relative Write Latency: 0 00:07:57.872 Idle Power: Not Reported 00:07:57.872 Active Power: Not Reported 00:07:57.872 Non-Operational Permissive Mode: Not Supported 00:07:57.872 00:07:57.872 Health Information 00:07:57.872 ================== 00:07:57.872 Critical Warnings: 00:07:57.872 Available Spare Space: OK 00:07:57.872 Temperature: OK 00:07:57.872 Device Reliability: OK 00:07:57.872 Read Only: No 00:07:57.872 Volatile Memory Backup: OK 00:07:57.872 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.872 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.872 Available Spare: 0% 00:07:57.872 Available Spare Threshold: 0% 00:07:57.872 Life Percentage Used: 0% 00:07:57.872 Data Units Read: 954 00:07:57.872 Data Units Written: 822 00:07:57.872 Host Read Commands: 47360 00:07:57.872 Host Write Commands: 46149 00:07:57.872 Controller Busy Time: 0 minutes 00:07:57.872 Power Cycles: 0 00:07:57.872 Power On Hours: 0 hours 00:07:57.872 Unsafe Shutdowns: 0 00:07:57.872 Unrecoverable Media Errors: 0 00:07:57.872 Lifetime Error Log Entries: 0 00:07:57.872 Warning Temperature Time: 0 minutes 00:07:57.872 Critical Temperature Time: 0 minutes 00:07:57.872 00:07:57.872 Number of Queues 00:07:57.872 ================ 00:07:57.872 Number of I/O Submission Queues: 64 00:07:57.872 Number of I/O Completion Queues: 64 00:07:57.872 00:07:57.872 ZNS Specific Controller Data 00:07:57.872 ============================ 00:07:57.872 Zone Append Size Limit: 0 00:07:57.872 00:07:57.872 00:07:57.872 Active Namespaces 00:07:57.872 ================= 00:07:57.872 Namespace ID:1 00:07:57.872 Error Recovery Timeout: Unlimited 00:07:57.872 Command Set Identifier: NVM (00h) 00:07:57.872 Deallocate: Supported 00:07:57.872 Deallocated/Unwritten Error: Supported 00:07:57.872 Deallocated Read Value: All 0x00 00:07:57.872 Deallocate in Write Zeroes: Not Supported 00:07:57.872 Deallocated Guard Field: 0xFFFF 00:07:57.872 Flush: Supported 00:07:57.872 Reservation: Not Supported 00:07:57.872 Namespace Sharing Capabilities: Private 00:07:57.872 Size (in LBAs): 1310720 (5GiB) 00:07:57.872 Capacity (in LBAs): 1310720 (5GiB) 00:07:57.872 Utilization (in LBAs): 1310720 (5GiB) 00:07:57.872 Thin Provisioning: Not Supported 00:07:57.872 Per-NS Atomic Units: No 00:07:57.872 Maximum Single Source Range Length: 128 00:07:57.872 Maximum Copy Length: 128 00:07:57.872 Maximum Source Range Count: 128 00:07:57.872 NGUID/EUI64 Never Reused: No 00:07:57.872 Namespace Write Protected: No 00:07:57.872 Number of LBA Formats: 8 00:07:57.872 Current LBA Format: LBA Format #04 00:07:57.872 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.872 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.872 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.872 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.872 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.872 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.872 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.872 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.872 00:07:57.872 NVM Specific Namespace Data 00:07:57.872 =========================== 00:07:57.872 Logical Block Storage Tag Mask: 0 00:07:57.872 Protection Information Capabilities: 00:07:57.872 16b Guard Protection Information Storage Tag Support: No 00:07:57.872 16b Guard Protection Information Storage Tag Mask: Any bit 
in LBSTM can be 0 00:07:57.872 Storage Tag Check Read Support: No 00:07:57.872 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.872 ===================================================== 00:07:57.872 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:57.872 ===================================================== 00:07:57.872 Controller Capabilities/Features 00:07:57.872 ================================ 00:07:57.872 Vendor ID: 1b36 00:07:57.872 Subsystem Vendor ID: 1af4 00:07:57.872 Serial Number: 12343 00:07:57.872 Model Number: QEMU NVMe Ctrl 00:07:57.872 Firmware Version: 8.0.0 00:07:57.872 Recommended Arb Burst: 6 00:07:57.872 IEEE OUI Identifier: 00 54 52 00:07:57.872 Mul[2024-08-11 12:49:49.414170] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 74129 terminated unexpected 00:07:57.872 [2024-08-11 12:49:49.415566] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 74129 terminated unexpected 00:07:57.872 [2024-08-11 12:49:49.416454] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 74129 terminated unexpected 00:07:57.872 ti-path I/O 00:07:57.872 May have multiple subsystem ports: No 00:07:57.872 May have multiple controllers: Yes 00:07:57.872 Associated with SR-IOV VF: No 00:07:57.872 Max Data Transfer Size: 524288 00:07:57.872 Max Number of Namespaces: 256 00:07:57.872 Max Number of I/O Queues: 64 00:07:57.872 NVMe Specification Version (VS): 1.4 00:07:57.872 NVMe Specification Version (Identify): 1.4 00:07:57.872 Maximum Queue Entries: 2048 00:07:57.872 Contiguous Queues Required: Yes 00:07:57.872 Arbitration Mechanisms Supported 00:07:57.872 Weighted Round Robin: Not Supported 00:07:57.872 Vendor Specific: Not Supported 00:07:57.872 Reset Timeout: 7500 ms 00:07:57.872 Doorbell Stride: 4 bytes 00:07:57.872 NVM Subsystem Reset: Not Supported 00:07:57.872 Command Sets Supported 00:07:57.872 NVM Command Set: Supported 00:07:57.872 Boot Partition: Not Supported 00:07:57.872 Memory Page Size Minimum: 4096 bytes 00:07:57.872 Memory Page Size Maximum: 65536 bytes 00:07:57.872 Persistent Memory Region: Not Supported 00:07:57.872 Optional Asynchronous Events Supported 00:07:57.872 Namespace Attribute Notices: Supported 00:07:57.872 Firmware Activation Notices: Not Supported 00:07:57.872 ANA Change Notices: Not Supported 00:07:57.872 PLE Aggregate Log Change Notices: Not Supported 00:07:57.872 LBA Status Info Alert Notices: Not Supported 00:07:57.872 EGE Aggregate Log Change Notices: Not Supported 00:07:57.872 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.872 Zone Descriptor Change Notices: Not Supported 00:07:57.872 Discovery Log Change Notices: Not Supported 00:07:57.872 Controller Attributes 00:07:57.872 128-bit Host 
Identifier: Not Supported 00:07:57.872 Non-Operational Permissive Mode: Not Supported 00:07:57.872 NVM Sets: Not Supported 00:07:57.872 Read Recovery Levels: Not Supported 00:07:57.872 Endurance Groups: Supported 00:07:57.872 Predictable Latency Mode: Not Supported 00:07:57.872 Traffic Based Keep ALive: Not Supported 00:07:57.872 Namespace Granularity: Not Supported 00:07:57.872 SQ Associations: Not Supported 00:07:57.872 UUID List: Not Supported 00:07:57.872 Multi-Domain Subsystem: Not Supported 00:07:57.872 Fixed Capacity Management: Not Supported 00:07:57.872 Variable Capacity Management: Not Supported 00:07:57.872 Delete Endurance Group: Not Supported 00:07:57.872 Delete NVM Set: Not Supported 00:07:57.872 Extended LBA Formats Supported: Supported 00:07:57.872 Flexible Data Placement Supported: Supported 00:07:57.872 00:07:57.872 Controller Memory Buffer Support 00:07:57.872 ================================ 00:07:57.872 Supported: No 00:07:57.872 00:07:57.872 Persistent Memory Region Support 00:07:57.872 ================================ 00:07:57.872 Supported: No 00:07:57.872 00:07:57.872 Admin Command Set Attributes 00:07:57.872 ============================ 00:07:57.872 Security Send/Receive: Not Supported 00:07:57.872 Format NVM: Supported 00:07:57.872 Firmware Activate/Download: Not Supported 00:07:57.872 Namespace Management: Supported 00:07:57.872 Device Self-Test: Not Supported 00:07:57.872 Directives: Supported 00:07:57.872 NVMe-MI: Not Supported 00:07:57.872 Virtualization Management: Not Supported 00:07:57.872 Doorbell Buffer Config: Supported 00:07:57.872 Get LBA Status Capability: Not Supported 00:07:57.872 Command & Feature Lockdown Capability: Not Supported 00:07:57.872 Abort Command Limit: 4 00:07:57.872 Async Event Request Limit: 4 00:07:57.872 Number of Firmware Slots: N/A 00:07:57.872 Firmware Slot 1 Read-Only: N/A 00:07:57.872 Firmware Activation Without Reset: N/A 00:07:57.872 Multiple Update Detection Support: N/A 00:07:57.872 Firmware Update Granularity: No Information Provided 00:07:57.873 Per-Namespace SMART Log: Yes 00:07:57.873 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.873 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:57.873 Command Effects Log Page: Supported 00:07:57.873 Get Log Page Extended Data: Supported 00:07:57.873 Telemetry Log Pages: Not Supported 00:07:57.873 Persistent Event Log Pages: Not Supported 00:07:57.873 Supported Log Pages Log Page: May Support 00:07:57.873 Commands Supported & Effects Log Page: Not Supported 00:07:57.873 Feature Identifiers & Effects Log Page:May Support 00:07:57.873 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.873 Data Area 4 for Telemetry Log: Not Supported 00:07:57.873 Error Log Page Entries Supported: 1 00:07:57.873 Keep Alive: Not Supported 00:07:57.873 00:07:57.873 NVM Command Set Attributes 00:07:57.873 ========================== 00:07:57.873 Submission Queue Entry Size 00:07:57.873 Max: 64 00:07:57.873 Min: 64 00:07:57.873 Completion Queue Entry Size 00:07:57.873 Max: 16 00:07:57.873 Min: 16 00:07:57.873 Number of Namespaces: 256 00:07:57.873 Compare Command: Supported 00:07:57.873 Write Uncorrectable Command: Not Supported 00:07:57.873 Dataset Management Command: Supported 00:07:57.873 Write Zeroes Command: Supported 00:07:57.873 Set Features Save Field: Supported 00:07:57.873 Reservations: Not Supported 00:07:57.873 Timestamp: Supported 00:07:57.873 Copy: Supported 00:07:57.873 Volatile Write Cache: Present 00:07:57.873 Atomic Write Unit (Normal): 1 00:07:57.873 Atomic 
Write Unit (PFail): 1 00:07:57.873 Atomic Compare & Write Unit: 1 00:07:57.873 Fused Compare & Write: Not Supported 00:07:57.873 Scatter-Gather List 00:07:57.873 SGL Command Set: Supported 00:07:57.873 SGL Keyed: Not Supported 00:07:57.873 SGL Bit Bucket Descriptor: Not Supported 00:07:57.873 SGL Metadata Pointer: Not Supported 00:07:57.873 Oversized SGL: Not Supported 00:07:57.873 SGL Metadata Address: Not Supported 00:07:57.873 SGL Offset: Not Supported 00:07:57.873 Transport SGL Data Block: Not Supported 00:07:57.873 Replay Protected Memory Block: Not Supported 00:07:57.873 00:07:57.873 Firmware Slot Information 00:07:57.873 ========================= 00:07:57.873 Active slot: 1 00:07:57.873 Slot 1 Firmware Revision: 1.0 00:07:57.873 00:07:57.873 00:07:57.873 Commands Supported and Effects 00:07:57.873 ============================== 00:07:57.873 Admin Commands 00:07:57.873 -------------- 00:07:57.873 Delete I/O Submission Queue (00h): Supported 00:07:57.873 Create I/O Submission Queue (01h): Supported 00:07:57.873 Get Log Page (02h): Supported 00:07:57.873 Delete I/O Completion Queue (04h): Supported 00:07:57.873 Create I/O Completion Queue (05h): Supported 00:07:57.873 Identify (06h): Supported 00:07:57.873 Abort (08h): Supported 00:07:57.873 Set Features (09h): Supported 00:07:57.873 Get Features (0Ah): Supported 00:07:57.873 Asynchronous Event Request (0Ch): Supported 00:07:57.873 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.873 Directive Send (19h): Supported 00:07:57.873 Directive Receive (1Ah): Supported 00:07:57.873 Virtualization Management (1Ch): Supported 00:07:57.873 Doorbell Buffer Config (7Ch): Supported 00:07:57.873 Format NVM (80h): Supported LBA-Change 00:07:57.873 I/O Commands 00:07:57.873 ------------ 00:07:57.873 Flush (00h): Supported LBA-Change 00:07:57.873 Write (01h): Supported LBA-Change 00:07:57.873 Read (02h): Supported 00:07:57.873 Compare (05h): Supported 00:07:57.873 Write Zeroes (08h): Supported LBA-Change 00:07:57.873 Dataset Management (09h): Supported LBA-Change 00:07:57.873 Unknown (0Ch): Supported 00:07:57.873 Unknown (12h): Supported 00:07:57.873 Copy (19h): Supported LBA-Change 00:07:57.873 Unknown (1Dh): Supported LBA-Change 00:07:57.873 00:07:57.873 Error Log 00:07:57.873 ========= 00:07:57.873 00:07:57.873 Arbitration 00:07:57.873 =========== 00:07:57.873 Arbitration Burst: no limit 00:07:57.873 00:07:57.873 Power Management 00:07:57.873 ================ 00:07:57.873 Number of Power States: 1 00:07:57.873 Current Power State: Power State #0 00:07:57.873 Power State #0: 00:07:57.873 Max Power: 25.00 W 00:07:57.873 Non-Operational State: Operational 00:07:57.873 Entry Latency: 16 microseconds 00:07:57.873 Exit Latency: 4 microseconds 00:07:57.873 Relative Read Throughput: 0 00:07:57.873 Relative Read Latency: 0 00:07:57.873 Relative Write Throughput: 0 00:07:57.873 Relative Write Latency: 0 00:07:57.873 Idle Power: Not Reported 00:07:57.873 Active Power: Not Reported 00:07:57.873 Non-Operational Permissive Mode: Not Supported 00:07:57.873 00:07:57.873 Health Information 00:07:57.873 ================== 00:07:57.873 Critical Warnings: 00:07:57.873 Available Spare Space: OK 00:07:57.873 Temperature: OK 00:07:57.873 Device Reliability: OK 00:07:57.873 Read Only: No 00:07:57.873 Volatile Memory Backup: OK 00:07:57.873 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.873 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.873 Available Spare: 0% 00:07:57.873 Available Spare Threshold: 0% 00:07:57.873 Life Percentage Used: 0% 
00:07:57.873 Data Units Read: 732 00:07:57.873 Data Units Written: 662 00:07:57.873 Host Read Commands: 33340 00:07:57.873 Host Write Commands: 32764 00:07:57.873 Controller Busy Time: 0 minutes 00:07:57.873 Power Cycles: 0 00:07:57.873 Power On Hours: 0 hours 00:07:57.873 Unsafe Shutdowns: 0 00:07:57.873 Unrecoverable Media Errors: 0 00:07:57.873 Lifetime Error Log Entries: 0 00:07:57.873 Warning Temperature Time: 0 minutes 00:07:57.873 Critical Temperature Time: 0 minutes 00:07:57.873 00:07:57.873 Number of Queues 00:07:57.873 ================ 00:07:57.873 Number of I/O Submission Queues: 64 00:07:57.873 Number of I/O Completion Queues: 64 00:07:57.873 00:07:57.873 ZNS Specific Controller Data 00:07:57.873 ============================ 00:07:57.873 Zone Append Size Limit: 0 00:07:57.873 00:07:57.873 00:07:57.873 Active Namespaces 00:07:57.873 ================= 00:07:57.873 Namespace ID:1 00:07:57.873 Error Recovery Timeout: Unlimited 00:07:57.873 Command Set Identifier: NVM (00h) 00:07:57.873 Deallocate: Supported 00:07:57.873 Deallocated/Unwritten Error: Supported 00:07:57.873 Deallocated Read Value: All 0x00 00:07:57.873 Deallocate in Write Zeroes: Not Supported 00:07:57.873 Deallocated Guard Field: 0xFFFF 00:07:57.873 Flush: Supported 00:07:57.873 Reservation: Not Supported 00:07:57.873 Namespace Sharing Capabilities: Multiple Controllers 00:07:57.873 Size (in LBAs): 262144 (1GiB) 00:07:57.873 Capacity (in LBAs): 262144 (1GiB) 00:07:57.873 Utilization (in LBAs): 262144 (1GiB) 00:07:57.873 Thin Provisioning: Not Supported 00:07:57.873 Per-NS Atomic Units: No 00:07:57.873 Maximum Single Source Range Length: 128 00:07:57.873 Maximum Copy Length: 128 00:07:57.873 Maximum Source Range Count: 128 00:07:57.873 NGUID/EUI64 Never Reused: No 00:07:57.873 Namespace Write Protected: No 00:07:57.873 Endurance group ID: 1 00:07:57.873 Number of LBA Formats: 8 00:07:57.873 Current LBA Format: LBA Format #04 00:07:57.873 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.873 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.873 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.873 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.873 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.873 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.873 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.873 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.873 00:07:57.873 Get Feature FDP: 00:07:57.873 ================ 00:07:57.873 Enabled: Yes 00:07:57.873 FDP configuration index: 0 00:07:57.873 00:07:57.873 FDP configurations log page 00:07:57.873 =========================== 00:07:57.873 Number of FDP configurations: 1 00:07:57.873 Version: 0 00:07:57.873 Size: 112 00:07:57.873 FDP Configuration Descriptor: 0 00:07:57.873 Descriptor Size: 96 00:07:57.873 Reclaim Group Identifier format: 2 00:07:57.873 FDP Volatile Write Cache: Not Present 00:07:57.873 FDP Configuration: Valid 00:07:57.873 Vendor Specific Size: 0 00:07:57.873 Number of Reclaim Groups: 2 00:07:57.873 Number of Recalim Unit Handles: 8 00:07:57.873 Max Placement Identifiers: 128 00:07:57.873 Number of Namespaces Suppprted: 256 00:07:57.873 Reclaim unit Nominal Size: 6000000 bytes 00:07:57.873 Estimated Reclaim Unit Time Limit: Not Reported 00:07:57.873 RUH Desc #000: RUH Type: Initially Isolated 00:07:57.873 RUH Desc #001: RUH Type: Initially Isolated 00:07:57.873 RUH Desc #002: RUH Type: Initially Isolated 00:07:57.873 RUH Desc #003: RUH Type: Initially Isolated 00:07:57.873 RUH Desc 
#004: RUH Type: Initially Isolated 00:07:57.873 RUH Desc #005: RUH Type: Initially Isolated 00:07:57.873 RUH Desc #006: RUH Type: Initially Isolated 00:07:57.873 RUH Desc #007: RUH Type: Initially Isolated 00:07:57.873 00:07:57.873 FDP reclaim unit handle usage log page 00:07:57.873 ====================================== 00:07:57.873 Number of Reclaim Unit Handles: 8 00:07:57.873 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:57.874 RUH Usage Desc #001: RUH Attributes: Unused 00:07:57.874 RUH Usage Desc #002: RUH Attributes: Unused 00:07:57.874 RUH Usage Desc #003: RUH Attributes: Unused 00:07:57.874 RUH Usage Desc #004: RUH Attributes: Unused 00:07:57.874 RUH Usage Desc #005: RUH Attributes: Unused 00:07:57.874 RUH Usage Desc #006: RUH Attributes: Unused 00:07:57.874 RUH Usage Desc #007: RUH Attributes: Unused 00:07:57.874 00:07:57.874 FDP statistics log page 00:07:57.874 ======================= 00:07:57.874 Host bytes with metadata written: 422617088 00:07:57.874 Media bytes with metadata written: 422641664 00:07:57.874 Media bytes erased: 0 00:07:57.874 00:07:57.874 FDP events log page 00:07:57.874 =================== 00:07:57.874 Number of FDP events: 0 00:07:57.874 00:07:57.874 NVM Specific Namespace Data 00:07:57.874 =========================== 00:07:57.874 Logical Block Storage Tag Mask: 0 00:07:57.874 Protection Information Capabilities: 00:07:57.874 16b Guard Protection Information Storage Tag Support: No 00:07:57.874 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.874 Storage Tag Check Read Support: No 00:07:57.874 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.874 ===================================================== 00:07:57.874 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:57.874 ===================================================== 00:07:57.874 Controller Capabilities/Features 00:07:57.874 ================================ 00:07:57.874 Vendor ID: 1b36 00:07:57.874 Subsystem Vendor ID: 1af4 00:07:57.874 Serial Number: 12342 00:07:57.874 Model Number: QEMU NVMe Ctrl 00:07:57.874 Firmware Version: 8.0.0 00:07:57.874 Recommended Arb Burst: 6 00:07:57.874 IEEE OUI Identifier: 00 54 52 00:07:57.874 Multi-path I/O 00:07:57.874 May have multiple subsystem ports: No 00:07:57.874 May have multiple controllers: No 00:07:57.874 Associated with SR-IOV VF: No 00:07:57.874 Max Data Transfer Size: 524288 00:07:57.874 Max Number of Namespaces: 256 00:07:57.874 Max Number of I/O Queues: 64 00:07:57.874 NVMe Specification Version (VS): 1.4 00:07:57.874 NVMe Specification Version (Identify): 1.4 00:07:57.874 Maximum Queue Entries: 2048 00:07:57.874 Contiguous Queues Required: Yes 00:07:57.874 Arbitration Mechanisms Supported 00:07:57.874 Weighted Round Robin: Not Supported 
00:07:57.874 Vendor Specific: Not Supported 00:07:57.874 Reset Timeout: 7500 ms 00:07:57.874 Doorbell Stride: 4 bytes 00:07:57.874 NVM Subsystem Reset: Not Supported 00:07:57.874 Command Sets Supported 00:07:57.874 NVM Command Set: Supported 00:07:57.874 Boot Partition: Not Supported 00:07:57.874 Memory Page Size Minimum: 4096 bytes 00:07:57.874 Memory Page Size Maximum: 65536 bytes 00:07:57.874 Persistent Memory Region: Not Supported 00:07:57.874 Optional Asynchronous Events Supported 00:07:57.874 Namespace Attribute Notices: Supported 00:07:57.874 Firmware Activation Notices: Not Supported 00:07:57.874 ANA Change Notices: Not Supported 00:07:57.874 PLE Aggregate Log Change Notices: Not Supported 00:07:57.874 LBA Status Info Alert Notices: Not Supported 00:07:57.874 EGE Aggregate Log Change Notices: Not Supported 00:07:57.874 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.874 Zone Descriptor Change Notices: Not Supported 00:07:57.874 Discovery Log Change Notices: Not Supported 00:07:57.874 Controller Attributes 00:07:57.874 128-bit Host Identifier: Not Supported 00:07:57.874 Non-Operational Permissive Mode: Not Supported 00:07:57.874 NVM Sets: Not Supported 00:07:57.874 Read Recovery Levels: Not Supported 00:07:57.874 Endurance Groups: Not Supported 00:07:57.874 Predictable Latency Mode: Not Supported 00:07:57.874 Traffic Based Keep ALive: Not Supported 00:07:57.874 Namespace Granularity: Not Supported 00:07:57.874 SQ Associations: Not Supported 00:07:57.874 UUID List: Not Supported 00:07:57.874 Multi-Domain Subsystem: Not Supported 00:07:57.874 Fixed Capacity Management: Not Supported 00:07:57.874 Variable Capacity Management: Not Supported 00:07:57.874 Delete Endurance Group: Not Supported 00:07:57.874 Delete NVM Set: Not Supported 00:07:57.874 Extended LBA Formats Supported: Supported 00:07:57.874 Flexible Data Placement Supported: Not Supported 00:07:57.874 00:07:57.874 Controller Memory Buffer Support 00:07:57.874 ================================ 00:07:57.874 Supported: No 00:07:57.874 00:07:57.874 Persistent Memory Region Support 00:07:57.874 ================================ 00:07:57.874 Supported: No 00:07:57.874 00:07:57.874 Admin Command Set Attributes 00:07:57.874 ============================ 00:07:57.874 Security Send/Receive: Not Supported 00:07:57.874 Format NVM: Supported 00:07:57.874 Firmware Activate/Download: Not Supported 00:07:57.874 Namespace Management: Supported 00:07:57.874 Device Self-Test: Not Supported 00:07:57.874 Directives: Supported 00:07:57.874 NVMe-MI: Not Supported 00:07:57.874 Virtualization Management: Not Supported 00:07:57.874 Doorbell Buffer Config: Supported 00:07:57.874 Get LBA Status Capability: Not Supported 00:07:57.874 Command & Feature Lockdown Capability: Not Supported 00:07:57.874 Abort Command Limit: 4 00:07:57.874 Async Event Request Limit: 4 00:07:57.874 Number of Firmware Slots: N/A 00:07:57.874 Firmware Slot 1 Read-Only: N/A 00:07:57.874 Firmware Activation Without Reset: N/A 00:07:57.874 Multiple Update Detection Support: N/A 00:07:57.874 Firmware Update Granularity: No Information Provided 00:07:57.874 Per-Namespace SMART Log: Yes 00:07:57.874 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.874 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:57.874 Command Effects Log Page: Supported 00:07:57.874 Get Log Page Extended Data: Supported 00:07:57.874 Telemetry Log Pages: Not Supported 00:07:57.874 Persistent Event Log Pages: Not Supported 00:07:57.874 Supported Log Pages Log Page: May Support 00:07:57.874 Commands 
Supported & Effects Log Page: Not Supported 00:07:57.874 Feature Identifiers & Effects Log Page:May Support 00:07:57.874 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.874 Data Area 4 for Telemetry Log: Not Supported 00:07:57.874 Error Log Page Entries Supported: 1 00:07:57.874 Keep Alive: Not Supported 00:07:57.874 00:07:57.874 NVM Command Set Attributes 00:07:57.874 ========================== 00:07:57.874 Submission Queue Entry Size 00:07:57.874 Max: 64 00:07:57.874 Min: 64 00:07:57.874 Completion Queue Entry Size 00:07:57.874 Max: 16 00:07:57.874 Min: 16 00:07:57.874 Number of Namespaces: 256 00:07:57.874 Compare Command: Supported 00:07:57.874 Write Uncorrectable Command: Not Supported 00:07:57.874 Dataset Management Command: Supported 00:07:57.874 Write Zeroes Command: Supported 00:07:57.874 Set Features Save Field: Supported 00:07:57.874 Reservations: Not Supported 00:07:57.874 Timestamp: Supported 00:07:57.874 Copy: Supported 00:07:57.874 Volatile Write Cache: Present 00:07:57.874 Atomic Write Unit (Normal): 1 00:07:57.874 Atomic Write Unit (PFail): 1 00:07:57.874 Atomic Compare & Write Unit: 1 00:07:57.874 Fused Compare & Write: Not Supported 00:07:57.874 Scatter-Gather List 00:07:57.874 SGL Command Set: Supported 00:07:57.874 SGL Keyed: Not Supported 00:07:57.874 SGL Bit Bucket Descriptor: Not Supported 00:07:57.874 SGL Metadata Pointer: Not Supported 00:07:57.874 Oversized SGL: Not Supported 00:07:57.874 SGL Metadata Address: Not Supported 00:07:57.875 SGL Offset: Not Supported 00:07:57.875 Transport SGL Data Block: Not Supported 00:07:57.875 Replay Protected Memory Block: Not Supported 00:07:57.875 00:07:57.875 Firmware Slot Information 00:07:57.875 ========================= 00:07:57.875 Active slot: 1 00:07:57.875 Slot 1 Firmware Revision: 1.0 00:07:57.875 00:07:57.875 00:07:57.875 Commands Supported and Effects 00:07:57.875 ============================== 00:07:57.875 Admin Commands 00:07:57.875 -------------- 00:07:57.875 Delete I/O Submission Queue (00h): Supported 00:07:57.875 Create I/O Submission Queue (01h): Supported 00:07:57.875 Get Log Page (02h): Supported 00:07:57.875 Delete I/O Completion Queue (04h): Supported 00:07:57.875 Create I/O Completion Queue (05h): Supported 00:07:57.875 Identify (06h): Supported 00:07:57.875 Abort (08h): Supported 00:07:57.875 Set Features (09h): Supported 00:07:57.875 Get Features (0Ah): Supported 00:07:57.875 Asynchronous Event Request (0Ch): Supported 00:07:57.875 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.875 Directive Send (19h): Supported 00:07:57.875 Directive Receive (1Ah): Supported 00:07:57.875 Virtualization Management (1Ch): Supported 00:07:57.875 Doorbell Buffer Config (7Ch): Supported 00:07:57.875 Format NVM (80h): Supported LBA-Change 00:07:57.875 I/O Commands 00:07:57.875 ------------ 00:07:57.875 Flush (00h): Supported LBA-Change 00:07:57.875 Write (01h): Supported LBA-Change 00:07:57.875 Read (02h): Supported 00:07:57.875 Compare (05h): Supported 00:07:57.875 Write Zeroes (08h): Supported LBA-Change 00:07:57.875 Dataset Management (09h): Supported LBA-Change 00:07:57.875 Unknown (0Ch): Supported 00:07:57.875 Unknown (12h): Supported 00:07:57.875 Copy (19h): Supported LBA-Change 00:07:57.875 Unknown (1Dh): Supported LBA-Change 00:07:57.875 00:07:57.875 Error Log 00:07:57.875 ========= 00:07:57.875 00:07:57.875 Arbitration 00:07:57.875 =========== 00:07:57.875 Arbitration Burst: no limit 00:07:57.875 00:07:57.875 Power Management 00:07:57.875 ================ 00:07:57.875 Number of Power 
States: 1 00:07:57.875 Current Power State: Power State #0 00:07:57.875 Power State #0: 00:07:57.875 Max Power: 25.00 W 00:07:57.875 Non-Operational State: Operational 00:07:57.875 Entry Latency: 16 microseconds 00:07:57.875 Exit Latency: 4 microseconds 00:07:57.875 Relative Read Throughput: 0 00:07:57.875 Relative Read Latency: 0 00:07:57.875 Relative Write Throughput: 0 00:07:57.875 Relative Write Latency: 0 00:07:57.875 Idle Power: Not Reported 00:07:57.875 Active Power: Not Reported 00:07:57.875 Non-Operational Permissive Mode: Not Supported 00:07:57.875 00:07:57.875 Health Information 00:07:57.875 ================== 00:07:57.875 Critical Warnings: 00:07:57.875 Available Spare Space: OK 00:07:57.875 Temperature: OK 00:07:57.875 Device Reliability: OK 00:07:57.875 Read Only: No 00:07:57.875 Volatile Memory Backup: OK 00:07:57.875 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.875 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.875 Available Spare: 0% 00:07:57.875 Available Spare Threshold: 0% 00:07:57.875 Life Percentage Used: 0% 00:07:57.875 Data Units Read: 1997 00:07:57.875 Data Units Written: 1784 00:07:57.875 Host Read Commands: 98398 00:07:57.875 Host Write Commands: 96667 00:07:57.875 Controller Busy Time: 0 minutes 00:07:57.875 Power Cycles: 0 00:07:57.875 Power On Hours: 0 hours 00:07:57.875 Unsafe Shutdowns: 0 00:07:57.875 Unrecoverable Media Errors: 0 00:07:57.875 Lifetime Error Log Entries: 0 00:07:57.875 Warning Temperature Time: 0 minutes 00:07:57.875 Critical Temperature Time: 0 minutes 00:07:57.875 00:07:57.875 Number of Queues 00:07:57.875 ================ 00:07:57.875 Number of I/O Submission Queues: 64 00:07:57.875 Number of I/O Completion Queues: 64 00:07:57.875 00:07:57.875 ZNS Specific Controller Data 00:07:57.875 ============================ 00:07:57.875 Zone Append Size Limit: 0 00:07:57.875 00:07:57.875 00:07:57.875 Active Namespaces 00:07:57.875 ================= 00:07:57.875 Namespace ID:1 00:07:57.875 Error Recovery Timeout: Unlimited 00:07:57.875 Command Set Identifier: NVM (00h) 00:07:57.875 Deallocate: Supported 00:07:57.875 Deallocated/Unwritten Error: Supported 00:07:57.875 Deallocated Read Value: All 0x00 00:07:57.875 Deallocate in Write Zeroes: Not Supported 00:07:57.875 Deallocated Guard Field: 0xFFFF 00:07:57.875 Flush: Supported 00:07:57.875 Reservation: Not Supported 00:07:57.875 Namespace Sharing Capabilities: Private 00:07:57.875 Size (in LBAs): 1048576 (4GiB) 00:07:57.875 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.875 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.875 Thin Provisioning: Not Supported 00:07:57.875 Per-NS Atomic Units: No 00:07:57.875 Maximum Single Source Range Length: 128 00:07:57.875 Maximum Copy Length: 128 00:07:57.875 Maximum Source Range Count: 128 00:07:57.875 NGUID/EUI64 Never Reused: No 00:07:57.875 Namespace Write Protected: No 00:07:57.875 Number of LBA Formats: 8 00:07:57.875 Current LBA Format: LBA Format #04 00:07:57.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.875 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.875 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.875 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.875 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.875 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.875 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.875 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.875 00:07:57.875 NVM Specific Namespace Data 00:07:57.875 =========================== 00:07:57.875 
Logical Block Storage Tag Mask: 0 00:07:57.875 Protection Information Capabilities: 00:07:57.875 16b Guard Protection Information Storage Tag Support: No 00:07:57.875 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.875 Storage Tag Check Read Support: No 00:07:57.875 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Namespace ID:2 00:07:57.875 Error Recovery Timeout: Unlimited 00:07:57.875 Command Set Identifier: NVM (00h) 00:07:57.875 Deallocate: Supported 00:07:57.875 Deallocated/Unwritten Error: Supported 00:07:57.875 Deallocated Read Value: All 0x00 00:07:57.875 Deallocate in Write Zeroes: Not Supported 00:07:57.875 Deallocated Guard Field: 0xFFFF 00:07:57.875 Flush: Supported 00:07:57.875 Reservation: Not Supported 00:07:57.875 Namespace Sharing Capabilities: Private 00:07:57.875 Size (in LBAs): 1048576 (4GiB) 00:07:57.875 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.875 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.875 Thin Provisioning: Not Supported 00:07:57.875 Per-NS Atomic Units: No 00:07:57.875 Maximum Single Source Range Length: 128 00:07:57.875 Maximum Copy Length: 128 00:07:57.875 Maximum Source Range Count: 128 00:07:57.875 NGUID/EUI64 Never Reused: No 00:07:57.875 Namespace Write Protected: No 00:07:57.875 Number of LBA Formats: 8 00:07:57.875 Current LBA Format: LBA Format #04 00:07:57.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.875 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.875 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.875 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.875 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.875 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.875 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.875 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.875 00:07:57.875 NVM Specific Namespace Data 00:07:57.875 =========================== 00:07:57.875 Logical Block Storage Tag Mask: 0 00:07:57.875 Protection Information Capabilities: 00:07:57.875 16b Guard Protection Information Storage Tag Support: No 00:07:57.875 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.875 Storage Tag Check Read Support: No 00:07:57.875 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:07:57.875 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.875 Namespace ID:3 00:07:57.875 Error Recovery Timeout: Unlimited 00:07:57.875 Command Set Identifier: NVM (00h) 00:07:57.876 Deallocate: Supported 00:07:57.876 Deallocated/Unwritten Error: Supported 00:07:57.876 Deallocated Read Value: All 0x00 00:07:57.876 Deallocate in Write Zeroes: Not Supported 00:07:57.876 Deallocated Guard Field: 0xFFFF 00:07:57.876 Flush: Supported 00:07:57.876 Reservation: Not Supported 00:07:57.876 Namespace Sharing Capabilities: Private 00:07:57.876 Size (in LBAs): 1048576 (4GiB) 00:07:57.876 Capacity (in LBAs): [2024-08-11 12:49:49.418842] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 74129 terminated unexpected 00:07:57.876 1048576 (4GiB) 00:07:57.876 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.876 Thin Provisioning: Not Supported 00:07:57.876 Per-NS Atomic Units: No 00:07:57.876 Maximum Single Source Range Length: 128 00:07:57.876 Maximum Copy Length: 128 00:07:57.876 Maximum Source Range Count: 128 00:07:57.876 NGUID/EUI64 Never Reused: No 00:07:57.876 Namespace Write Protected: No 00:07:57.876 Number of LBA Formats: 8 00:07:57.876 Current LBA Format: LBA Format #04 00:07:57.876 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.876 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.876 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.876 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.876 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.876 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.876 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.876 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.876 00:07:57.876 NVM Specific Namespace Data 00:07:57.876 =========================== 00:07:57.876 Logical Block Storage Tag Mask: 0 00:07:57.876 Protection Information Capabilities: 00:07:57.876 16b Guard Protection Information Storage Tag Support: No 00:07:57.876 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.876 Storage Tag Check Read Support: No 00:07:57.876 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.876 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.876 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:58.134 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:58.134 
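The xtrace fragments above (nvme.sh lines 15-16) show the test iterating over the discovered PCIe addresses and running spdk_nvme_identify once per controller before dumping its identify data. A minimal sketch of that loop, assuming the bdfs array holds the four addresses that appear in this run (the real script populates it dynamically, so the literal list below is illustrative only):

    #!/usr/bin/env bash
    # Sketch of the per-controller identify loop seen in the xtrace above.
    # bdfs is filled here with the addresses visible in this log; nvme.sh
    # discovers them at runtime, so this hard-coded list is an assumption.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)

    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done

Each iteration produces one of the "NVMe Controller at <bdf>" reports that follow, and the "Invalid opts->opts_size 0 too small" warning is emitted by the tool at the start of every invocation.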
===================================================== 00:07:58.134 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.134 ===================================================== 00:07:58.134 Controller Capabilities/Features 00:07:58.134 ================================ 00:07:58.134 Vendor ID: 1b36 00:07:58.134 Subsystem Vendor ID: 1af4 00:07:58.134 Serial Number: 12340 00:07:58.134 Model Number: QEMU NVMe Ctrl 00:07:58.134 Firmware Version: 8.0.0 00:07:58.135 Recommended Arb Burst: 6 00:07:58.135 IEEE OUI Identifier: 00 54 52 00:07:58.135 Multi-path I/O 00:07:58.135 May have multiple subsystem ports: No 00:07:58.135 May have multiple controllers: No 00:07:58.135 Associated with SR-IOV VF: No 00:07:58.135 Max Data Transfer Size: 524288 00:07:58.135 Max Number of Namespaces: 256 00:07:58.135 Max Number of I/O Queues: 64 00:07:58.135 NVMe Specification Version (VS): 1.4 00:07:58.135 NVMe Specification Version (Identify): 1.4 00:07:58.135 Maximum Queue Entries: 2048 00:07:58.135 Contiguous Queues Required: Yes 00:07:58.135 Arbitration Mechanisms Supported 00:07:58.135 Weighted Round Robin: Not Supported 00:07:58.135 Vendor Specific: Not Supported 00:07:58.135 Reset Timeout: 7500 ms 00:07:58.135 Doorbell Stride: 4 bytes 00:07:58.135 NVM Subsystem Reset: Not Supported 00:07:58.135 Command Sets Supported 00:07:58.135 NVM Command Set: Supported 00:07:58.135 Boot Partition: Not Supported 00:07:58.135 Memory Page Size Minimum: 4096 bytes 00:07:58.135 Memory Page Size Maximum: 65536 bytes 00:07:58.135 Persistent Memory Region: Not Supported 00:07:58.135 Optional Asynchronous Events Supported 00:07:58.135 Namespace Attribute Notices: Supported 00:07:58.135 Firmware Activation Notices: Not Supported 00:07:58.135 ANA Change Notices: Not Supported 00:07:58.135 PLE Aggregate Log Change Notices: Not Supported 00:07:58.135 LBA Status Info Alert Notices: Not Supported 00:07:58.135 EGE Aggregate Log Change Notices: Not Supported 00:07:58.135 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.135 Zone Descriptor Change Notices: Not Supported 00:07:58.135 Discovery Log Change Notices: Not Supported 00:07:58.135 Controller Attributes 00:07:58.135 128-bit Host Identifier: Not Supported 00:07:58.135 Non-Operational Permissive Mode: Not Supported 00:07:58.135 NVM Sets: Not Supported 00:07:58.135 Read Recovery Levels: Not Supported 00:07:58.135 Endurance Groups: Not Supported 00:07:58.135 Predictable Latency Mode: Not Supported 00:07:58.135 Traffic Based Keep ALive: Not Supported 00:07:58.135 Namespace Granularity: Not Supported 00:07:58.135 SQ Associations: Not Supported 00:07:58.135 UUID List: Not Supported 00:07:58.135 Multi-Domain Subsystem: Not Supported 00:07:58.135 Fixed Capacity Management: Not Supported 00:07:58.135 Variable Capacity Management: Not Supported 00:07:58.135 Delete Endurance Group: Not Supported 00:07:58.135 Delete NVM Set: Not Supported 00:07:58.135 Extended LBA Formats Supported: Supported 00:07:58.135 Flexible Data Placement Supported: Not Supported 00:07:58.135 00:07:58.135 Controller Memory Buffer Support 00:07:58.135 ================================ 00:07:58.135 Supported: No 00:07:58.135 00:07:58.135 Persistent Memory Region Support 00:07:58.135 ================================ 00:07:58.135 Supported: No 00:07:58.135 00:07:58.135 Admin Command Set Attributes 00:07:58.135 ============================ 00:07:58.135 Security Send/Receive: Not Supported 00:07:58.135 Format NVM: Supported 00:07:58.135 Firmware Activate/Download: Not Supported 00:07:58.135 Namespace Management: 
Supported 00:07:58.135 Device Self-Test: Not Supported 00:07:58.135 Directives: Supported 00:07:58.135 NVMe-MI: Not Supported 00:07:58.135 Virtualization Management: Not Supported 00:07:58.135 Doorbell Buffer Config: Supported 00:07:58.135 Get LBA Status Capability: Not Supported 00:07:58.135 Command & Feature Lockdown Capability: Not Supported 00:07:58.135 Abort Command Limit: 4 00:07:58.135 Async Event Request Limit: 4 00:07:58.135 Number of Firmware Slots: N/A 00:07:58.135 Firmware Slot 1 Read-Only: N/A 00:07:58.135 Firmware Activation Without Reset: N/A 00:07:58.135 Multiple Update Detection Support: N/A 00:07:58.135 Firmware Update Granularity: No Information Provided 00:07:58.135 Per-Namespace SMART Log: Yes 00:07:58.135 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.135 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:58.135 Command Effects Log Page: Supported 00:07:58.135 Get Log Page Extended Data: Supported 00:07:58.135 Telemetry Log Pages: Not Supported 00:07:58.135 Persistent Event Log Pages: Not Supported 00:07:58.135 Supported Log Pages Log Page: May Support 00:07:58.135 Commands Supported & Effects Log Page: Not Supported 00:07:58.135 Feature Identifiers & Effects Log Page:May Support 00:07:58.135 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.135 Data Area 4 for Telemetry Log: Not Supported 00:07:58.135 Error Log Page Entries Supported: 1 00:07:58.135 Keep Alive: Not Supported 00:07:58.135 00:07:58.135 NVM Command Set Attributes 00:07:58.135 ========================== 00:07:58.135 Submission Queue Entry Size 00:07:58.135 Max: 64 00:07:58.135 Min: 64 00:07:58.135 Completion Queue Entry Size 00:07:58.135 Max: 16 00:07:58.135 Min: 16 00:07:58.135 Number of Namespaces: 256 00:07:58.135 Compare Command: Supported 00:07:58.135 Write Uncorrectable Command: Not Supported 00:07:58.135 Dataset Management Command: Supported 00:07:58.135 Write Zeroes Command: Supported 00:07:58.135 Set Features Save Field: Supported 00:07:58.135 Reservations: Not Supported 00:07:58.135 Timestamp: Supported 00:07:58.135 Copy: Supported 00:07:58.135 Volatile Write Cache: Present 00:07:58.135 Atomic Write Unit (Normal): 1 00:07:58.135 Atomic Write Unit (PFail): 1 00:07:58.135 Atomic Compare & Write Unit: 1 00:07:58.135 Fused Compare & Write: Not Supported 00:07:58.135 Scatter-Gather List 00:07:58.135 SGL Command Set: Supported 00:07:58.135 SGL Keyed: Not Supported 00:07:58.135 SGL Bit Bucket Descriptor: Not Supported 00:07:58.135 SGL Metadata Pointer: Not Supported 00:07:58.135 Oversized SGL: Not Supported 00:07:58.135 SGL Metadata Address: Not Supported 00:07:58.135 SGL Offset: Not Supported 00:07:58.135 Transport SGL Data Block: Not Supported 00:07:58.135 Replay Protected Memory Block: Not Supported 00:07:58.135 00:07:58.135 Firmware Slot Information 00:07:58.135 ========================= 00:07:58.135 Active slot: 1 00:07:58.135 Slot 1 Firmware Revision: 1.0 00:07:58.135 00:07:58.135 00:07:58.135 Commands Supported and Effects 00:07:58.135 ============================== 00:07:58.135 Admin Commands 00:07:58.135 -------------- 00:07:58.135 Delete I/O Submission Queue (00h): Supported 00:07:58.135 Create I/O Submission Queue (01h): Supported 00:07:58.135 Get Log Page (02h): Supported 00:07:58.135 Delete I/O Completion Queue (04h): Supported 00:07:58.135 Create I/O Completion Queue (05h): Supported 00:07:58.135 Identify (06h): Supported 00:07:58.135 Abort (08h): Supported 00:07:58.135 Set Features (09h): Supported 00:07:58.135 Get Features (0Ah): Supported 00:07:58.135 Asynchronous 
Event Request (0Ch): Supported 00:07:58.135 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.135 Directive Send (19h): Supported 00:07:58.135 Directive Receive (1Ah): Supported 00:07:58.135 Virtualization Management (1Ch): Supported 00:07:58.135 Doorbell Buffer Config (7Ch): Supported 00:07:58.135 Format NVM (80h): Supported LBA-Change 00:07:58.135 I/O Commands 00:07:58.135 ------------ 00:07:58.135 Flush (00h): Supported LBA-Change 00:07:58.135 Write (01h): Supported LBA-Change 00:07:58.135 Read (02h): Supported 00:07:58.135 Compare (05h): Supported 00:07:58.135 Write Zeroes (08h): Supported LBA-Change 00:07:58.135 Dataset Management (09h): Supported LBA-Change 00:07:58.135 Unknown (0Ch): Supported 00:07:58.135 Unknown (12h): Supported 00:07:58.135 Copy (19h): Supported LBA-Change 00:07:58.135 Unknown (1Dh): Supported LBA-Change 00:07:58.135 00:07:58.135 Error Log 00:07:58.135 ========= 00:07:58.135 00:07:58.135 Arbitration 00:07:58.135 =========== 00:07:58.135 Arbitration Burst: no limit 00:07:58.135 00:07:58.135 Power Management 00:07:58.135 ================ 00:07:58.135 Number of Power States: 1 00:07:58.135 Current Power State: Power State #0 00:07:58.135 Power State #0: 00:07:58.135 Max Power: 25.00 W 00:07:58.135 Non-Operational State: Operational 00:07:58.135 Entry Latency: 16 microseconds 00:07:58.135 Exit Latency: 4 microseconds 00:07:58.135 Relative Read Throughput: 0 00:07:58.135 Relative Read Latency: 0 00:07:58.135 Relative Write Throughput: 0 00:07:58.135 Relative Write Latency: 0 00:07:58.135 Idle Power: Not Reported 00:07:58.135 Active Power: Not Reported 00:07:58.135 Non-Operational Permissive Mode: Not Supported 00:07:58.135 00:07:58.135 Health Information 00:07:58.135 ================== 00:07:58.135 Critical Warnings: 00:07:58.135 Available Spare Space: OK 00:07:58.135 Temperature: OK 00:07:58.135 Device Reliability: OK 00:07:58.135 Read Only: No 00:07:58.135 Volatile Memory Backup: OK 00:07:58.135 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.135 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.135 Available Spare: 0% 00:07:58.135 Available Spare Threshold: 0% 00:07:58.135 Life Percentage Used: 0% 00:07:58.135 Data Units Read: 634 00:07:58.135 Data Units Written: 562 00:07:58.135 Host Read Commands: 31974 00:07:58.135 Host Write Commands: 31760 00:07:58.135 Controller Busy Time: 0 minutes 00:07:58.135 Power Cycles: 0 00:07:58.135 Power On Hours: 0 hours 00:07:58.135 Unsafe Shutdowns: 0 00:07:58.135 Unrecoverable Media Errors: 0 00:07:58.135 Lifetime Error Log Entries: 0 00:07:58.135 Warning Temperature Time: 0 minutes 00:07:58.135 Critical Temperature Time: 0 minutes 00:07:58.135 00:07:58.135 Number of Queues 00:07:58.135 ================ 00:07:58.135 Number of I/O Submission Queues: 64 00:07:58.135 Number of I/O Completion Queues: 64 00:07:58.135 00:07:58.135 ZNS Specific Controller Data 00:07:58.135 ============================ 00:07:58.135 Zone Append Size Limit: 0 00:07:58.135 00:07:58.135 00:07:58.135 Active Namespaces 00:07:58.135 ================= 00:07:58.135 Namespace ID:1 00:07:58.135 Error Recovery Timeout: Unlimited 00:07:58.135 Command Set Identifier: NVM (00h) 00:07:58.135 Deallocate: Supported 00:07:58.135 Deallocated/Unwritten Error: Supported 00:07:58.135 Deallocated Read Value: All 0x00 00:07:58.135 Deallocate in Write Zeroes: Not Supported 00:07:58.135 Deallocated Guard Field: 0xFFFF 00:07:58.135 Flush: Supported 00:07:58.135 Reservation: Not Supported 00:07:58.135 Metadata Transferred as: Separate Metadata Buffer 
00:07:58.135 Namespace Sharing Capabilities: Private 00:07:58.135 Size (in LBAs): 1548666 (5GiB) 00:07:58.135 Capacity (in LBAs): 1548666 (5GiB) 00:07:58.135 Utilization (in LBAs): 1548666 (5GiB) 00:07:58.135 Thin Provisioning: Not Supported 00:07:58.135 Per-NS Atomic Units: No 00:07:58.135 Maximum Single Source Range Length: 128 00:07:58.135 Maximum Copy Length: 128 00:07:58.135 Maximum Source Range Count: 128 00:07:58.136 NGUID/EUI64 Never Reused: No 00:07:58.136 Namespace Write Protected: No 00:07:58.136 Number of LBA Formats: 8 00:07:58.136 Current LBA Format: LBA Format #07 00:07:58.136 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.136 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.136 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.136 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.136 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.136 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.136 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.136 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.136 00:07:58.136 NVM Specific Namespace Data 00:07:58.136 =========================== 00:07:58.136 Logical Block Storage Tag Mask: 0 00:07:58.136 Protection Information Capabilities: 00:07:58.136 16b Guard Protection Information Storage Tag Support: No 00:07:58.136 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.136 Storage Tag Check Read Support: No 00:07:58.136 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.136 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.136 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.136 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.136 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.136 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.136 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.136 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.394 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:58.394 12:49:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:58.394 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:58.394 ===================================================== 00:07:58.394 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.394 ===================================================== 00:07:58.394 Controller Capabilities/Features 00:07:58.394 ================================ 00:07:58.394 Vendor ID: 1b36 00:07:58.394 Subsystem Vendor ID: 1af4 00:07:58.394 Serial Number: 12341 00:07:58.394 Model Number: QEMU NVMe Ctrl 00:07:58.394 Firmware Version: 8.0.0 00:07:58.394 Recommended Arb Burst: 6 00:07:58.394 IEEE OUI Identifier: 00 54 52 00:07:58.394 Multi-path I/O 00:07:58.394 May have multiple subsystem ports: No 00:07:58.394 May have multiple controllers: No 00:07:58.394 Associated with SR-IOV VF: No 00:07:58.394 Max Data Transfer Size: 524288 00:07:58.394 Max Number of Namespaces: 256 00:07:58.394 Max Number of I/O Queues: 64 00:07:58.394 NVMe Specification Version (VS): 1.4 00:07:58.394 
NVMe Specification Version (Identify): 1.4 00:07:58.394 Maximum Queue Entries: 2048 00:07:58.394 Contiguous Queues Required: Yes 00:07:58.394 Arbitration Mechanisms Supported 00:07:58.394 Weighted Round Robin: Not Supported 00:07:58.394 Vendor Specific: Not Supported 00:07:58.394 Reset Timeout: 7500 ms 00:07:58.394 Doorbell Stride: 4 bytes 00:07:58.394 NVM Subsystem Reset: Not Supported 00:07:58.394 Command Sets Supported 00:07:58.394 NVM Command Set: Supported 00:07:58.394 Boot Partition: Not Supported 00:07:58.395 Memory Page Size Minimum: 4096 bytes 00:07:58.395 Memory Page Size Maximum: 65536 bytes 00:07:58.395 Persistent Memory Region: Not Supported 00:07:58.395 Optional Asynchronous Events Supported 00:07:58.395 Namespace Attribute Notices: Supported 00:07:58.395 Firmware Activation Notices: Not Supported 00:07:58.395 ANA Change Notices: Not Supported 00:07:58.395 PLE Aggregate Log Change Notices: Not Supported 00:07:58.395 LBA Status Info Alert Notices: Not Supported 00:07:58.395 EGE Aggregate Log Change Notices: Not Supported 00:07:58.395 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.395 Zone Descriptor Change Notices: Not Supported 00:07:58.395 Discovery Log Change Notices: Not Supported 00:07:58.395 Controller Attributes 00:07:58.395 128-bit Host Identifier: Not Supported 00:07:58.395 Non-Operational Permissive Mode: Not Supported 00:07:58.395 NVM Sets: Not Supported 00:07:58.395 Read Recovery Levels: Not Supported 00:07:58.395 Endurance Groups: Not Supported 00:07:58.395 Predictable Latency Mode: Not Supported 00:07:58.395 Traffic Based Keep ALive: Not Supported 00:07:58.395 Namespace Granularity: Not Supported 00:07:58.395 SQ Associations: Not Supported 00:07:58.395 UUID List: Not Supported 00:07:58.395 Multi-Domain Subsystem: Not Supported 00:07:58.395 Fixed Capacity Management: Not Supported 00:07:58.395 Variable Capacity Management: Not Supported 00:07:58.395 Delete Endurance Group: Not Supported 00:07:58.395 Delete NVM Set: Not Supported 00:07:58.395 Extended LBA Formats Supported: Supported 00:07:58.395 Flexible Data Placement Supported: Not Supported 00:07:58.395 00:07:58.395 Controller Memory Buffer Support 00:07:58.395 ================================ 00:07:58.395 Supported: No 00:07:58.395 00:07:58.395 Persistent Memory Region Support 00:07:58.395 ================================ 00:07:58.395 Supported: No 00:07:58.395 00:07:58.395 Admin Command Set Attributes 00:07:58.395 ============================ 00:07:58.395 Security Send/Receive: Not Supported 00:07:58.395 Format NVM: Supported 00:07:58.395 Firmware Activate/Download: Not Supported 00:07:58.395 Namespace Management: Supported 00:07:58.395 Device Self-Test: Not Supported 00:07:58.395 Directives: Supported 00:07:58.395 NVMe-MI: Not Supported 00:07:58.395 Virtualization Management: Not Supported 00:07:58.395 Doorbell Buffer Config: Supported 00:07:58.395 Get LBA Status Capability: Not Supported 00:07:58.395 Command & Feature Lockdown Capability: Not Supported 00:07:58.395 Abort Command Limit: 4 00:07:58.395 Async Event Request Limit: 4 00:07:58.395 Number of Firmware Slots: N/A 00:07:58.395 Firmware Slot 1 Read-Only: N/A 00:07:58.395 Firmware Activation Without Reset: N/A 00:07:58.395 Multiple Update Detection Support: N/A 00:07:58.395 Firmware Update Granularity: No Information Provided 00:07:58.395 Per-Namespace SMART Log: Yes 00:07:58.395 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.395 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:58.395 Command Effects Log Page: Supported 
00:07:58.395 Get Log Page Extended Data: Supported 00:07:58.395 Telemetry Log Pages: Not Supported 00:07:58.395 Persistent Event Log Pages: Not Supported 00:07:58.395 Supported Log Pages Log Page: May Support 00:07:58.395 Commands Supported & Effects Log Page: Not Supported 00:07:58.395 Feature Identifiers & Effects Log Page:May Support 00:07:58.395 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.395 Data Area 4 for Telemetry Log: Not Supported 00:07:58.395 Error Log Page Entries Supported: 1 00:07:58.395 Keep Alive: Not Supported 00:07:58.395 00:07:58.395 NVM Command Set Attributes 00:07:58.395 ========================== 00:07:58.395 Submission Queue Entry Size 00:07:58.395 Max: 64 00:07:58.395 Min: 64 00:07:58.395 Completion Queue Entry Size 00:07:58.395 Max: 16 00:07:58.395 Min: 16 00:07:58.395 Number of Namespaces: 256 00:07:58.395 Compare Command: Supported 00:07:58.395 Write Uncorrectable Command: Not Supported 00:07:58.395 Dataset Management Command: Supported 00:07:58.395 Write Zeroes Command: Supported 00:07:58.395 Set Features Save Field: Supported 00:07:58.395 Reservations: Not Supported 00:07:58.395 Timestamp: Supported 00:07:58.395 Copy: Supported 00:07:58.395 Volatile Write Cache: Present 00:07:58.395 Atomic Write Unit (Normal): 1 00:07:58.395 Atomic Write Unit (PFail): 1 00:07:58.395 Atomic Compare & Write Unit: 1 00:07:58.395 Fused Compare & Write: Not Supported 00:07:58.395 Scatter-Gather List 00:07:58.395 SGL Command Set: Supported 00:07:58.395 SGL Keyed: Not Supported 00:07:58.395 SGL Bit Bucket Descriptor: Not Supported 00:07:58.395 SGL Metadata Pointer: Not Supported 00:07:58.395 Oversized SGL: Not Supported 00:07:58.395 SGL Metadata Address: Not Supported 00:07:58.395 SGL Offset: Not Supported 00:07:58.395 Transport SGL Data Block: Not Supported 00:07:58.395 Replay Protected Memory Block: Not Supported 00:07:58.395 00:07:58.395 Firmware Slot Information 00:07:58.395 ========================= 00:07:58.395 Active slot: 1 00:07:58.395 Slot 1 Firmware Revision: 1.0 00:07:58.395 00:07:58.395 00:07:58.395 Commands Supported and Effects 00:07:58.395 ============================== 00:07:58.395 Admin Commands 00:07:58.395 -------------- 00:07:58.395 Delete I/O Submission Queue (00h): Supported 00:07:58.395 Create I/O Submission Queue (01h): Supported 00:07:58.395 Get Log Page (02h): Supported 00:07:58.395 Delete I/O Completion Queue (04h): Supported 00:07:58.395 Create I/O Completion Queue (05h): Supported 00:07:58.395 Identify (06h): Supported 00:07:58.395 Abort (08h): Supported 00:07:58.395 Set Features (09h): Supported 00:07:58.395 Get Features (0Ah): Supported 00:07:58.395 Asynchronous Event Request (0Ch): Supported 00:07:58.395 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.395 Directive Send (19h): Supported 00:07:58.395 Directive Receive (1Ah): Supported 00:07:58.395 Virtualization Management (1Ch): Supported 00:07:58.395 Doorbell Buffer Config (7Ch): Supported 00:07:58.395 Format NVM (80h): Supported LBA-Change 00:07:58.395 I/O Commands 00:07:58.395 ------------ 00:07:58.395 Flush (00h): Supported LBA-Change 00:07:58.395 Write (01h): Supported LBA-Change 00:07:58.395 Read (02h): Supported 00:07:58.395 Compare (05h): Supported 00:07:58.395 Write Zeroes (08h): Supported LBA-Change 00:07:58.395 Dataset Management (09h): Supported LBA-Change 00:07:58.395 Unknown (0Ch): Supported 00:07:58.395 Unknown (12h): Supported 00:07:58.395 Copy (19h): Supported LBA-Change 00:07:58.395 Unknown (1Dh): Supported LBA-Change 00:07:58.395 00:07:58.395 Error 
Log 00:07:58.395 ========= 00:07:58.395 00:07:58.395 Arbitration 00:07:58.395 =========== 00:07:58.395 Arbitration Burst: no limit 00:07:58.395 00:07:58.395 Power Management 00:07:58.395 ================ 00:07:58.395 Number of Power States: 1 00:07:58.395 Current Power State: Power State #0 00:07:58.395 Power State #0: 00:07:58.395 Max Power: 25.00 W 00:07:58.395 Non-Operational State: Operational 00:07:58.395 Entry Latency: 16 microseconds 00:07:58.395 Exit Latency: 4 microseconds 00:07:58.395 Relative Read Throughput: 0 00:07:58.395 Relative Read Latency: 0 00:07:58.395 Relative Write Throughput: 0 00:07:58.395 Relative Write Latency: 0 00:07:58.655 Idle Power: Not Reported 00:07:58.655 Active Power: Not Reported 00:07:58.655 Non-Operational Permissive Mode: Not Supported 00:07:58.655 00:07:58.655 Health Information 00:07:58.655 ================== 00:07:58.655 Critical Warnings: 00:07:58.655 Available Spare Space: OK 00:07:58.655 Temperature: OK 00:07:58.655 Device Reliability: OK 00:07:58.655 Read Only: No 00:07:58.655 Volatile Memory Backup: OK 00:07:58.655 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.655 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.655 Available Spare: 0% 00:07:58.655 Available Spare Threshold: 0% 00:07:58.655 Life Percentage Used: 0% 00:07:58.655 Data Units Read: 954 00:07:58.655 Data Units Written: 822 00:07:58.655 Host Read Commands: 47360 00:07:58.655 Host Write Commands: 46149 00:07:58.655 Controller Busy Time: 0 minutes 00:07:58.655 Power Cycles: 0 00:07:58.655 Power On Hours: 0 hours 00:07:58.655 Unsafe Shutdowns: 0 00:07:58.655 Unrecoverable Media Errors: 0 00:07:58.655 Lifetime Error Log Entries: 0 00:07:58.655 Warning Temperature Time: 0 minutes 00:07:58.655 Critical Temperature Time: 0 minutes 00:07:58.655 00:07:58.655 Number of Queues 00:07:58.655 ================ 00:07:58.655 Number of I/O Submission Queues: 64 00:07:58.655 Number of I/O Completion Queues: 64 00:07:58.655 00:07:58.655 ZNS Specific Controller Data 00:07:58.655 ============================ 00:07:58.655 Zone Append Size Limit: 0 00:07:58.655 00:07:58.655 00:07:58.655 Active Namespaces 00:07:58.655 ================= 00:07:58.655 Namespace ID:1 00:07:58.655 Error Recovery Timeout: Unlimited 00:07:58.655 Command Set Identifier: NVM (00h) 00:07:58.655 Deallocate: Supported 00:07:58.655 Deallocated/Unwritten Error: Supported 00:07:58.655 Deallocated Read Value: All 0x00 00:07:58.655 Deallocate in Write Zeroes: Not Supported 00:07:58.655 Deallocated Guard Field: 0xFFFF 00:07:58.655 Flush: Supported 00:07:58.655 Reservation: Not Supported 00:07:58.655 Namespace Sharing Capabilities: Private 00:07:58.655 Size (in LBAs): 1310720 (5GiB) 00:07:58.655 Capacity (in LBAs): 1310720 (5GiB) 00:07:58.655 Utilization (in LBAs): 1310720 (5GiB) 00:07:58.655 Thin Provisioning: Not Supported 00:07:58.655 Per-NS Atomic Units: No 00:07:58.655 Maximum Single Source Range Length: 128 00:07:58.655 Maximum Copy Length: 128 00:07:58.655 Maximum Source Range Count: 128 00:07:58.655 NGUID/EUI64 Never Reused: No 00:07:58.655 Namespace Write Protected: No 00:07:58.655 Number of LBA Formats: 8 00:07:58.655 Current LBA Format: LBA Format #04 00:07:58.655 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.655 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.655 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.655 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.655 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.655 LBA Format #05: Data Size: 4096 Metadata Size: 8 
00:07:58.655 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.655 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.655 00:07:58.655 NVM Specific Namespace Data 00:07:58.655 =========================== 00:07:58.655 Logical Block Storage Tag Mask: 0 00:07:58.655 Protection Information Capabilities: 00:07:58.655 16b Guard Protection Information Storage Tag Support: No 00:07:58.655 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.655 Storage Tag Check Read Support: No 00:07:58.655 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.655 12:49:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:58.655 12:49:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:58.655 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:58.655 ===================================================== 00:07:58.655 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.655 ===================================================== 00:07:58.655 Controller Capabilities/Features 00:07:58.655 ================================ 00:07:58.655 Vendor ID: 1b36 00:07:58.655 Subsystem Vendor ID: 1af4 00:07:58.655 Serial Number: 12342 00:07:58.655 Model Number: QEMU NVMe Ctrl 00:07:58.655 Firmware Version: 8.0.0 00:07:58.655 Recommended Arb Burst: 6 00:07:58.655 IEEE OUI Identifier: 00 54 52 00:07:58.655 Multi-path I/O 00:07:58.655 May have multiple subsystem ports: No 00:07:58.655 May have multiple controllers: No 00:07:58.655 Associated with SR-IOV VF: No 00:07:58.655 Max Data Transfer Size: 524288 00:07:58.655 Max Number of Namespaces: 256 00:07:58.655 Max Number of I/O Queues: 64 00:07:58.655 NVMe Specification Version (VS): 1.4 00:07:58.655 NVMe Specification Version (Identify): 1.4 00:07:58.655 Maximum Queue Entries: 2048 00:07:58.655 Contiguous Queues Required: Yes 00:07:58.655 Arbitration Mechanisms Supported 00:07:58.655 Weighted Round Robin: Not Supported 00:07:58.655 Vendor Specific: Not Supported 00:07:58.655 Reset Timeout: 7500 ms 00:07:58.655 Doorbell Stride: 4 bytes 00:07:58.655 NVM Subsystem Reset: Not Supported 00:07:58.655 Command Sets Supported 00:07:58.655 NVM Command Set: Supported 00:07:58.655 Boot Partition: Not Supported 00:07:58.655 Memory Page Size Minimum: 4096 bytes 00:07:58.655 Memory Page Size Maximum: 65536 bytes 00:07:58.655 Persistent Memory Region: Not Supported 00:07:58.655 Optional Asynchronous Events Supported 00:07:58.655 Namespace Attribute Notices: Supported 00:07:58.655 Firmware Activation Notices: Not Supported 00:07:58.655 ANA Change Notices: Not Supported 00:07:58.655 PLE Aggregate Log Change Notices: Not Supported 00:07:58.655 LBA Status 
Info Alert Notices: Not Supported 00:07:58.655 EGE Aggregate Log Change Notices: Not Supported 00:07:58.655 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.655 Zone Descriptor Change Notices: Not Supported 00:07:58.655 Discovery Log Change Notices: Not Supported 00:07:58.655 Controller Attributes 00:07:58.655 128-bit Host Identifier: Not Supported 00:07:58.655 Non-Operational Permissive Mode: Not Supported 00:07:58.655 NVM Sets: Not Supported 00:07:58.655 Read Recovery Levels: Not Supported 00:07:58.655 Endurance Groups: Not Supported 00:07:58.655 Predictable Latency Mode: Not Supported 00:07:58.655 Traffic Based Keep ALive: Not Supported 00:07:58.655 Namespace Granularity: Not Supported 00:07:58.655 SQ Associations: Not Supported 00:07:58.655 UUID List: Not Supported 00:07:58.655 Multi-Domain Subsystem: Not Supported 00:07:58.655 Fixed Capacity Management: Not Supported 00:07:58.655 Variable Capacity Management: Not Supported 00:07:58.655 Delete Endurance Group: Not Supported 00:07:58.655 Delete NVM Set: Not Supported 00:07:58.656 Extended LBA Formats Supported: Supported 00:07:58.656 Flexible Data Placement Supported: Not Supported 00:07:58.656 00:07:58.656 Controller Memory Buffer Support 00:07:58.656 ================================ 00:07:58.656 Supported: No 00:07:58.656 00:07:58.656 Persistent Memory Region Support 00:07:58.656 ================================ 00:07:58.656 Supported: No 00:07:58.656 00:07:58.656 Admin Command Set Attributes 00:07:58.656 ============================ 00:07:58.656 Security Send/Receive: Not Supported 00:07:58.656 Format NVM: Supported 00:07:58.656 Firmware Activate/Download: Not Supported 00:07:58.656 Namespace Management: Supported 00:07:58.656 Device Self-Test: Not Supported 00:07:58.656 Directives: Supported 00:07:58.656 NVMe-MI: Not Supported 00:07:58.656 Virtualization Management: Not Supported 00:07:58.656 Doorbell Buffer Config: Supported 00:07:58.656 Get LBA Status Capability: Not Supported 00:07:58.656 Command & Feature Lockdown Capability: Not Supported 00:07:58.656 Abort Command Limit: 4 00:07:58.656 Async Event Request Limit: 4 00:07:58.656 Number of Firmware Slots: N/A 00:07:58.656 Firmware Slot 1 Read-Only: N/A 00:07:58.656 Firmware Activation Without Reset: N/A 00:07:58.656 Multiple Update Detection Support: N/A 00:07:58.656 Firmware Update Granularity: No Information Provided 00:07:58.656 Per-Namespace SMART Log: Yes 00:07:58.656 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.656 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:58.656 Command Effects Log Page: Supported 00:07:58.656 Get Log Page Extended Data: Supported 00:07:58.656 Telemetry Log Pages: Not Supported 00:07:58.656 Persistent Event Log Pages: Not Supported 00:07:58.656 Supported Log Pages Log Page: May Support 00:07:58.656 Commands Supported & Effects Log Page: Not Supported 00:07:58.656 Feature Identifiers & Effects Log Page:May Support 00:07:58.656 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.656 Data Area 4 for Telemetry Log: Not Supported 00:07:58.656 Error Log Page Entries Supported: 1 00:07:58.656 Keep Alive: Not Supported 00:07:58.656 00:07:58.656 NVM Command Set Attributes 00:07:58.656 ========================== 00:07:58.656 Submission Queue Entry Size 00:07:58.656 Max: 64 00:07:58.656 Min: 64 00:07:58.656 Completion Queue Entry Size 00:07:58.656 Max: 16 00:07:58.656 Min: 16 00:07:58.656 Number of Namespaces: 256 00:07:58.656 Compare Command: Supported 00:07:58.656 Write Uncorrectable Command: Not Supported 00:07:58.656 Dataset 
Management Command: Supported 00:07:58.656 Write Zeroes Command: Supported 00:07:58.656 Set Features Save Field: Supported 00:07:58.656 Reservations: Not Supported 00:07:58.656 Timestamp: Supported 00:07:58.656 Copy: Supported 00:07:58.656 Volatile Write Cache: Present 00:07:58.656 Atomic Write Unit (Normal): 1 00:07:58.656 Atomic Write Unit (PFail): 1 00:07:58.656 Atomic Compare & Write Unit: 1 00:07:58.656 Fused Compare & Write: Not Supported 00:07:58.656 Scatter-Gather List 00:07:58.656 SGL Command Set: Supported 00:07:58.656 SGL Keyed: Not Supported 00:07:58.656 SGL Bit Bucket Descriptor: Not Supported 00:07:58.656 SGL Metadata Pointer: Not Supported 00:07:58.656 Oversized SGL: Not Supported 00:07:58.656 SGL Metadata Address: Not Supported 00:07:58.656 SGL Offset: Not Supported 00:07:58.656 Transport SGL Data Block: Not Supported 00:07:58.656 Replay Protected Memory Block: Not Supported 00:07:58.656 00:07:58.656 Firmware Slot Information 00:07:58.656 ========================= 00:07:58.656 Active slot: 1 00:07:58.656 Slot 1 Firmware Revision: 1.0 00:07:58.656 00:07:58.656 00:07:58.656 Commands Supported and Effects 00:07:58.656 ============================== 00:07:58.656 Admin Commands 00:07:58.656 -------------- 00:07:58.656 Delete I/O Submission Queue (00h): Supported 00:07:58.656 Create I/O Submission Queue (01h): Supported 00:07:58.656 Get Log Page (02h): Supported 00:07:58.656 Delete I/O Completion Queue (04h): Supported 00:07:58.656 Create I/O Completion Queue (05h): Supported 00:07:58.656 Identify (06h): Supported 00:07:58.656 Abort (08h): Supported 00:07:58.656 Set Features (09h): Supported 00:07:58.656 Get Features (0Ah): Supported 00:07:58.656 Asynchronous Event Request (0Ch): Supported 00:07:58.656 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.656 Directive Send (19h): Supported 00:07:58.656 Directive Receive (1Ah): Supported 00:07:58.656 Virtualization Management (1Ch): Supported 00:07:58.656 Doorbell Buffer Config (7Ch): Supported 00:07:58.656 Format NVM (80h): Supported LBA-Change 00:07:58.656 I/O Commands 00:07:58.656 ------------ 00:07:58.656 Flush (00h): Supported LBA-Change 00:07:58.656 Write (01h): Supported LBA-Change 00:07:58.656 Read (02h): Supported 00:07:58.656 Compare (05h): Supported 00:07:58.656 Write Zeroes (08h): Supported LBA-Change 00:07:58.656 Dataset Management (09h): Supported LBA-Change 00:07:58.656 Unknown (0Ch): Supported 00:07:58.656 Unknown (12h): Supported 00:07:58.656 Copy (19h): Supported LBA-Change 00:07:58.656 Unknown (1Dh): Supported LBA-Change 00:07:58.656 00:07:58.656 Error Log 00:07:58.656 ========= 00:07:58.656 00:07:58.656 Arbitration 00:07:58.656 =========== 00:07:58.656 Arbitration Burst: no limit 00:07:58.656 00:07:58.656 Power Management 00:07:58.656 ================ 00:07:58.656 Number of Power States: 1 00:07:58.656 Current Power State: Power State #0 00:07:58.656 Power State #0: 00:07:58.656 Max Power: 25.00 W 00:07:58.656 Non-Operational State: Operational 00:07:58.656 Entry Latency: 16 microseconds 00:07:58.656 Exit Latency: 4 microseconds 00:07:58.656 Relative Read Throughput: 0 00:07:58.656 Relative Read Latency: 0 00:07:58.656 Relative Write Throughput: 0 00:07:58.656 Relative Write Latency: 0 00:07:58.656 Idle Power: Not Reported 00:07:58.656 Active Power: Not Reported 00:07:58.656 Non-Operational Permissive Mode: Not Supported 00:07:58.656 00:07:58.656 Health Information 00:07:58.656 ================== 00:07:58.656 Critical Warnings: 00:07:58.656 Available Spare Space: OK 00:07:58.656 Temperature: OK 
00:07:58.656 Device Reliability: OK 00:07:58.656 Read Only: No 00:07:58.656 Volatile Memory Backup: OK 00:07:58.656 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.656 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.656 Available Spare: 0% 00:07:58.656 Available Spare Threshold: 0% 00:07:58.656 Life Percentage Used: 0% 00:07:58.656 Data Units Read: 1997 00:07:58.656 Data Units Written: 1784 00:07:58.656 Host Read Commands: 98398 00:07:58.656 Host Write Commands: 96667 00:07:58.656 Controller Busy Time: 0 minutes 00:07:58.656 Power Cycles: 0 00:07:58.656 Power On Hours: 0 hours 00:07:58.656 Unsafe Shutdowns: 0 00:07:58.656 Unrecoverable Media Errors: 0 00:07:58.656 Lifetime Error Log Entries: 0 00:07:58.656 Warning Temperature Time: 0 minutes 00:07:58.656 Critical Temperature Time: 0 minutes 00:07:58.656 00:07:58.656 Number of Queues 00:07:58.656 ================ 00:07:58.656 Number of I/O Submission Queues: 64 00:07:58.656 Number of I/O Completion Queues: 64 00:07:58.656 00:07:58.656 ZNS Specific Controller Data 00:07:58.656 ============================ 00:07:58.656 Zone Append Size Limit: 0 00:07:58.656 00:07:58.656 00:07:58.656 Active Namespaces 00:07:58.656 ================= 00:07:58.656 Namespace ID:1 00:07:58.656 Error Recovery Timeout: Unlimited 00:07:58.656 Command Set Identifier: NVM (00h) 00:07:58.656 Deallocate: Supported 00:07:58.656 Deallocated/Unwritten Error: Supported 00:07:58.656 Deallocated Read Value: All 0x00 00:07:58.656 Deallocate in Write Zeroes: Not Supported 00:07:58.656 Deallocated Guard Field: 0xFFFF 00:07:58.656 Flush: Supported 00:07:58.656 Reservation: Not Supported 00:07:58.656 Namespace Sharing Capabilities: Private 00:07:58.656 Size (in LBAs): 1048576 (4GiB) 00:07:58.656 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.656 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.656 Thin Provisioning: Not Supported 00:07:58.656 Per-NS Atomic Units: No 00:07:58.656 Maximum Single Source Range Length: 128 00:07:58.656 Maximum Copy Length: 128 00:07:58.656 Maximum Source Range Count: 128 00:07:58.656 NGUID/EUI64 Never Reused: No 00:07:58.656 Namespace Write Protected: No 00:07:58.656 Number of LBA Formats: 8 00:07:58.656 Current LBA Format: LBA Format #04 00:07:58.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.656 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.656 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.656 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.656 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.656 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.656 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.656 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.656 00:07:58.657 NVM Specific Namespace Data 00:07:58.657 =========================== 00:07:58.657 Logical Block Storage Tag Mask: 0 00:07:58.657 Protection Information Capabilities: 00:07:58.657 16b Guard Protection Information Storage Tag Support: No 00:07:58.657 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.657 Storage Tag Check Read Support: No 00:07:58.657 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 
Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Namespace ID:2 00:07:58.657 Error Recovery Timeout: Unlimited 00:07:58.657 Command Set Identifier: NVM (00h) 00:07:58.657 Deallocate: Supported 00:07:58.657 Deallocated/Unwritten Error: Supported 00:07:58.657 Deallocated Read Value: All 0x00 00:07:58.657 Deallocate in Write Zeroes: Not Supported 00:07:58.657 Deallocated Guard Field: 0xFFFF 00:07:58.657 Flush: Supported 00:07:58.657 Reservation: Not Supported 00:07:58.657 Namespace Sharing Capabilities: Private 00:07:58.657 Size (in LBAs): 1048576 (4GiB) 00:07:58.657 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.657 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.657 Thin Provisioning: Not Supported 00:07:58.657 Per-NS Atomic Units: No 00:07:58.657 Maximum Single Source Range Length: 128 00:07:58.657 Maximum Copy Length: 128 00:07:58.657 Maximum Source Range Count: 128 00:07:58.657 NGUID/EUI64 Never Reused: No 00:07:58.657 Namespace Write Protected: No 00:07:58.657 Number of LBA Formats: 8 00:07:58.657 Current LBA Format: LBA Format #04 00:07:58.657 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.657 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.657 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.657 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.657 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.657 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.657 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.657 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.657 00:07:58.657 NVM Specific Namespace Data 00:07:58.657 =========================== 00:07:58.657 Logical Block Storage Tag Mask: 0 00:07:58.657 Protection Information Capabilities: 00:07:58.657 16b Guard Protection Information Storage Tag Support: No 00:07:58.657 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.657 Storage Tag Check Read Support: No 00:07:58.657 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.657 Namespace ID:3 00:07:58.657 Error Recovery Timeout: Unlimited 00:07:58.657 Command Set Identifier: NVM (00h) 00:07:58.657 Deallocate: Supported 00:07:58.657 Deallocated/Unwritten Error: Supported 00:07:58.657 Deallocated Read Value: All 0x00 00:07:58.657 Deallocate in Write Zeroes: Not Supported 00:07:58.657 Deallocated Guard Field: 0xFFFF 00:07:58.657 Flush: Supported 00:07:58.657 Reservation: 
Not Supported 00:07:58.657 Namespace Sharing Capabilities: Private 00:07:58.657 Size (in LBAs): 1048576 (4GiB) 00:07:58.657 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.657 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.657 Thin Provisioning: Not Supported 00:07:58.657 Per-NS Atomic Units: No 00:07:58.657 Maximum Single Source Range Length: 128 00:07:58.657 Maximum Copy Length: 128 00:07:58.657 Maximum Source Range Count: 128 00:07:58.657 NGUID/EUI64 Never Reused: No 00:07:58.657 Namespace Write Protected: No 00:07:58.657 Number of LBA Formats: 8 00:07:58.657 Current LBA Format: LBA Format #04 00:07:58.657 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.657 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.657 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.657 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.657 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.657 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.657 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.657 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.657 00:07:58.657 NVM Specific Namespace Data 00:07:58.657 =========================== 00:07:58.657 Logical Block Storage Tag Mask: 0 00:07:58.657 Protection Information Capabilities: 00:07:58.657 16b Guard Protection Information Storage Tag Support: No 00:07:58.657 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.916 Storage Tag Check Read Support: No 00:07:58.916 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.916 12:49:50 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:58.916 12:49:50 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:58.916 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:58.916 ===================================================== 00:07:58.916 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.916 ===================================================== 00:07:58.916 Controller Capabilities/Features 00:07:58.916 ================================ 00:07:58.916 Vendor ID: 1b36 00:07:58.916 Subsystem Vendor ID: 1af4 00:07:58.916 Serial Number: 12343 00:07:58.916 Model Number: QEMU NVMe Ctrl 00:07:58.916 Firmware Version: 8.0.0 00:07:58.916 Recommended Arb Burst: 6 00:07:58.916 IEEE OUI Identifier: 00 54 52 00:07:58.916 Multi-path I/O 00:07:58.916 May have multiple subsystem ports: No 00:07:58.916 May have multiple controllers: Yes 00:07:58.916 Associated with SR-IOV VF: No 00:07:58.916 Max Data Transfer Size: 524288 00:07:58.916 Max Number of Namespaces: 256 00:07:58.916 Max Number of I/O Queues: 64 00:07:58.916 NVMe Specification Version (VS): 
1.4 00:07:58.916 NVMe Specification Version (Identify): 1.4 00:07:58.916 Maximum Queue Entries: 2048 00:07:58.916 Contiguous Queues Required: Yes 00:07:58.916 Arbitration Mechanisms Supported 00:07:58.916 Weighted Round Robin: Not Supported 00:07:58.916 Vendor Specific: Not Supported 00:07:58.916 Reset Timeout: 7500 ms 00:07:58.916 Doorbell Stride: 4 bytes 00:07:58.916 NVM Subsystem Reset: Not Supported 00:07:58.916 Command Sets Supported 00:07:58.916 NVM Command Set: Supported 00:07:58.916 Boot Partition: Not Supported 00:07:58.916 Memory Page Size Minimum: 4096 bytes 00:07:58.916 Memory Page Size Maximum: 65536 bytes 00:07:58.916 Persistent Memory Region: Not Supported 00:07:58.916 Optional Asynchronous Events Supported 00:07:58.916 Namespace Attribute Notices: Supported 00:07:58.916 Firmware Activation Notices: Not Supported 00:07:58.916 ANA Change Notices: Not Supported 00:07:58.916 PLE Aggregate Log Change Notices: Not Supported 00:07:58.916 LBA Status Info Alert Notices: Not Supported 00:07:58.916 EGE Aggregate Log Change Notices: Not Supported 00:07:58.916 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.916 Zone Descriptor Change Notices: Not Supported 00:07:58.916 Discovery Log Change Notices: Not Supported 00:07:58.916 Controller Attributes 00:07:58.916 128-bit Host Identifier: Not Supported 00:07:58.916 Non-Operational Permissive Mode: Not Supported 00:07:58.916 NVM Sets: Not Supported 00:07:58.916 Read Recovery Levels: Not Supported 00:07:58.916 Endurance Groups: Supported 00:07:58.916 Predictable Latency Mode: Not Supported 00:07:58.916 Traffic Based Keep Alive: Not Supported 00:07:58.916 Namespace Granularity: Not Supported 00:07:58.916 SQ Associations: Not Supported 00:07:58.916 UUID List: Not Supported 00:07:58.916 Multi-Domain Subsystem: Not Supported 00:07:58.916 Fixed Capacity Management: Not Supported 00:07:58.916 Variable Capacity Management: Not Supported 00:07:58.916 Delete Endurance Group: Not Supported 00:07:58.916 Delete NVM Set: Not Supported 00:07:58.916 Extended LBA Formats Supported: Supported 00:07:58.916 Flexible Data Placement Supported: Supported 00:07:58.917 00:07:58.917 Controller Memory Buffer Support 00:07:58.917 ================================ 00:07:58.917 Supported: No 00:07:58.917 00:07:58.917 Persistent Memory Region Support 00:07:58.917 ================================ 00:07:58.917 Supported: No 00:07:58.917 00:07:58.917 Admin Command Set Attributes 00:07:58.917 ============================ 00:07:58.917 Security Send/Receive: Not Supported 00:07:58.917 Format NVM: Supported 00:07:58.917 Firmware Activate/Download: Not Supported 00:07:58.917 Namespace Management: Supported 00:07:58.917 Device Self-Test: Not Supported 00:07:58.917 Directives: Supported 00:07:58.917 NVMe-MI: Not Supported 00:07:58.917 Virtualization Management: Not Supported 00:07:58.917 Doorbell Buffer Config: Supported 00:07:58.917 Get LBA Status Capability: Not Supported 00:07:58.917 Command & Feature Lockdown Capability: Not Supported 00:07:58.917 Abort Command Limit: 4 00:07:58.917 Async Event Request Limit: 4 00:07:58.917 Number of Firmware Slots: N/A 00:07:58.917 Firmware Slot 1 Read-Only: N/A 00:07:58.917 Firmware Activation Without Reset: N/A 00:07:58.917 Multiple Update Detection Support: N/A 00:07:59.176 Firmware Update Granularity: No Information Provided 00:07:59.176 Per-Namespace SMART Log: Yes 00:07:59.176 Asymmetric Namespace Access Log Page: Not Supported 00:07:59.176 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:59.176 Command Effects Log Page: 
Supported 00:07:59.176 Get Log Page Extended Data: Supported 00:07:59.176 Telemetry Log Pages: Not Supported 00:07:59.176 Persistent Event Log Pages: Not Supported 00:07:59.176 Supported Log Pages Log Page: May Support 00:07:59.176 Commands Supported & Effects Log Page: Not Supported 00:07:59.176 Feature Identifiers & Effects Log Page:May Support 00:07:59.176 NVMe-MI Commands & Effects Log Page: May Support 00:07:59.176 Data Area 4 for Telemetry Log: Not Supported 00:07:59.176 Error Log Page Entries Supported: 1 00:07:59.176 Keep Alive: Not Supported 00:07:59.176 00:07:59.176 NVM Command Set Attributes 00:07:59.176 ========================== 00:07:59.176 Submission Queue Entry Size 00:07:59.176 Max: 64 00:07:59.176 Min: 64 00:07:59.176 Completion Queue Entry Size 00:07:59.176 Max: 16 00:07:59.176 Min: 16 00:07:59.176 Number of Namespaces: 256 00:07:59.176 Compare Command: Supported 00:07:59.176 Write Uncorrectable Command: Not Supported 00:07:59.176 Dataset Management Command: Supported 00:07:59.176 Write Zeroes Command: Supported 00:07:59.176 Set Features Save Field: Supported 00:07:59.176 Reservations: Not Supported 00:07:59.176 Timestamp: Supported 00:07:59.176 Copy: Supported 00:07:59.176 Volatile Write Cache: Present 00:07:59.176 Atomic Write Unit (Normal): 1 00:07:59.176 Atomic Write Unit (PFail): 1 00:07:59.176 Atomic Compare & Write Unit: 1 00:07:59.176 Fused Compare & Write: Not Supported 00:07:59.176 Scatter-Gather List 00:07:59.176 SGL Command Set: Supported 00:07:59.176 SGL Keyed: Not Supported 00:07:59.176 SGL Bit Bucket Descriptor: Not Supported 00:07:59.176 SGL Metadata Pointer: Not Supported 00:07:59.176 Oversized SGL: Not Supported 00:07:59.176 SGL Metadata Address: Not Supported 00:07:59.176 SGL Offset: Not Supported 00:07:59.176 Transport SGL Data Block: Not Supported 00:07:59.176 Replay Protected Memory Block: Not Supported 00:07:59.176 00:07:59.176 Firmware Slot Information 00:07:59.176 ========================= 00:07:59.176 Active slot: 1 00:07:59.176 Slot 1 Firmware Revision: 1.0 00:07:59.176 00:07:59.176 00:07:59.176 Commands Supported and Effects 00:07:59.176 ============================== 00:07:59.176 Admin Commands 00:07:59.176 -------------- 00:07:59.176 Delete I/O Submission Queue (00h): Supported 00:07:59.176 Create I/O Submission Queue (01h): Supported 00:07:59.176 Get Log Page (02h): Supported 00:07:59.176 Delete I/O Completion Queue (04h): Supported 00:07:59.176 Create I/O Completion Queue (05h): Supported 00:07:59.176 Identify (06h): Supported 00:07:59.176 Abort (08h): Supported 00:07:59.176 Set Features (09h): Supported 00:07:59.176 Get Features (0Ah): Supported 00:07:59.176 Asynchronous Event Request (0Ch): Supported 00:07:59.176 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:59.176 Directive Send (19h): Supported 00:07:59.176 Directive Receive (1Ah): Supported 00:07:59.176 Virtualization Management (1Ch): Supported 00:07:59.176 Doorbell Buffer Config (7Ch): Supported 00:07:59.176 Format NVM (80h): Supported LBA-Change 00:07:59.176 I/O Commands 00:07:59.176 ------------ 00:07:59.176 Flush (00h): Supported LBA-Change 00:07:59.176 Write (01h): Supported LBA-Change 00:07:59.176 Read (02h): Supported 00:07:59.176 Compare (05h): Supported 00:07:59.176 Write Zeroes (08h): Supported LBA-Change 00:07:59.176 Dataset Management (09h): Supported LBA-Change 00:07:59.176 Unknown (0Ch): Supported 00:07:59.176 Unknown (12h): Supported 00:07:59.176 Copy (19h): Supported LBA-Change 00:07:59.176 Unknown (1Dh): Supported LBA-Change 00:07:59.176 
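The identify dumps above pair raw fields with derived values: the GiB figure next to each namespace size is simply the LBA count times the data size of the current LBA format, and the Celsius figure next to the SMART temperature is the Kelvin reading minus 273. A minimal shell sketch of that arithmetic, using the numbers reported above (1048576 LBAs at 4096 bytes per block, 323 Kelvin); the variable names are illustrative and not part of the tool output:

  lbas=1048576      # "Size (in LBAs)" reported for namespace 1 above
  lba_size=4096     # data size of the current LBA format (#04)
  echo "capacity: $(( lbas * lba_size / 1024 / 1024 / 1024 )) GiB"   # -> 4 GiB

  kelvin=323        # "Current Temperature" from the health section above
  echo "temperature: $(( kelvin - 273 )) C"                          # -> 50 C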
00:07:59.176 Error Log 00:07:59.176 ========= 00:07:59.176 00:07:59.176 Arbitration 00:07:59.176 =========== 00:07:59.176 Arbitration Burst: no limit 00:07:59.176 00:07:59.176 Power Management 00:07:59.176 ================ 00:07:59.176 Number of Power States: 1 00:07:59.176 Current Power State: Power State #0 00:07:59.176 Power State #0: 00:07:59.176 Max Power: 25.00 W 00:07:59.176 Non-Operational State: Operational 00:07:59.176 Entry Latency: 16 microseconds 00:07:59.176 Exit Latency: 4 microseconds 00:07:59.176 Relative Read Throughput: 0 00:07:59.176 Relative Read Latency: 0 00:07:59.176 Relative Write Throughput: 0 00:07:59.176 Relative Write Latency: 0 00:07:59.177 Idle Power: Not Reported 00:07:59.177 Active Power: Not Reported 00:07:59.177 Non-Operational Permissive Mode: Not Supported 00:07:59.177 00:07:59.177 Health Information 00:07:59.177 ================== 00:07:59.177 Critical Warnings: 00:07:59.177 Available Spare Space: OK 00:07:59.177 Temperature: OK 00:07:59.177 Device Reliability: OK 00:07:59.177 Read Only: No 00:07:59.177 Volatile Memory Backup: OK 00:07:59.177 Current Temperature: 323 Kelvin (50 Celsius) 00:07:59.177 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:59.177 Available Spare: 0% 00:07:59.177 Available Spare Threshold: 0% 00:07:59.177 Life Percentage Used: 0% 00:07:59.177 Data Units Read: 732 00:07:59.177 Data Units Written: 662 00:07:59.177 Host Read Commands: 33340 00:07:59.177 Host Write Commands: 32764 00:07:59.177 Controller Busy Time: 0 minutes 00:07:59.177 Power Cycles: 0 00:07:59.177 Power On Hours: 0 hours 00:07:59.177 Unsafe Shutdowns: 0 00:07:59.177 Unrecoverable Media Errors: 0 00:07:59.177 Lifetime Error Log Entries: 0 00:07:59.177 Warning Temperature Time: 0 minutes 00:07:59.177 Critical Temperature Time: 0 minutes 00:07:59.177 00:07:59.177 Number of Queues 00:07:59.177 ================ 00:07:59.177 Number of I/O Submission Queues: 64 00:07:59.177 Number of I/O Completion Queues: 64 00:07:59.177 00:07:59.177 ZNS Specific Controller Data 00:07:59.177 ============================ 00:07:59.177 Zone Append Size Limit: 0 00:07:59.177 00:07:59.177 00:07:59.177 Active Namespaces 00:07:59.177 ================= 00:07:59.177 Namespace ID:1 00:07:59.177 Error Recovery Timeout: Unlimited 00:07:59.177 Command Set Identifier: NVM (00h) 00:07:59.177 Deallocate: Supported 00:07:59.177 Deallocated/Unwritten Error: Supported 00:07:59.177 Deallocated Read Value: All 0x00 00:07:59.177 Deallocate in Write Zeroes: Not Supported 00:07:59.177 Deallocated Guard Field: 0xFFFF 00:07:59.177 Flush: Supported 00:07:59.177 Reservation: Not Supported 00:07:59.177 Namespace Sharing Capabilities: Multiple Controllers 00:07:59.177 Size (in LBAs): 262144 (1GiB) 00:07:59.177 Capacity (in LBAs): 262144 (1GiB) 00:07:59.177 Utilization (in LBAs): 262144 (1GiB) 00:07:59.177 Thin Provisioning: Not Supported 00:07:59.177 Per-NS Atomic Units: No 00:07:59.177 Maximum Single Source Range Length: 128 00:07:59.177 Maximum Copy Length: 128 00:07:59.177 Maximum Source Range Count: 128 00:07:59.177 NGUID/EUI64 Never Reused: No 00:07:59.177 Namespace Write Protected: No 00:07:59.177 Endurance group ID: 1 00:07:59.177 Number of LBA Formats: 8 00:07:59.177 Current LBA Format: LBA Format #04 00:07:59.177 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.177 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.177 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.177 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.177 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:59.177 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.177 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.177 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.177 00:07:59.177 Get Feature FDP: 00:07:59.177 ================ 00:07:59.177 Enabled: Yes 00:07:59.177 FDP configuration index: 0 00:07:59.177 00:07:59.177 FDP configurations log page 00:07:59.177 =========================== 00:07:59.177 Number of FDP configurations: 1 00:07:59.177 Version: 0 00:07:59.177 Size: 112 00:07:59.177 FDP Configuration Descriptor: 0 00:07:59.177 Descriptor Size: 96 00:07:59.177 Reclaim Group Identifier format: 2 00:07:59.177 FDP Volatile Write Cache: Not Present 00:07:59.177 FDP Configuration: Valid 00:07:59.177 Vendor Specific Size: 0 00:07:59.177 Number of Reclaim Groups: 2 00:07:59.177 Number of Reclaim Unit Handles: 8 00:07:59.177 Max Placement Identifiers: 128 00:07:59.177 Number of Namespaces Supported: 256 00:07:59.177 Reclaim Unit Nominal Size: 6000000 bytes 00:07:59.177 Estimated Reclaim Unit Time Limit: Not Reported 00:07:59.177 RUH Desc #000: RUH Type: Initially Isolated 00:07:59.177 RUH Desc #001: RUH Type: Initially Isolated 00:07:59.177 RUH Desc #002: RUH Type: Initially Isolated 00:07:59.177 RUH Desc #003: RUH Type: Initially Isolated 00:07:59.177 RUH Desc #004: RUH Type: Initially Isolated 00:07:59.177 RUH Desc #005: RUH Type: Initially Isolated 00:07:59.177 RUH Desc #006: RUH Type: Initially Isolated 00:07:59.177 RUH Desc #007: RUH Type: Initially Isolated 00:07:59.177 00:07:59.177 FDP reclaim unit handle usage log page 00:07:59.177 ====================================== 00:07:59.177 Number of Reclaim Unit Handles: 8 00:07:59.177 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:59.177 RUH Usage Desc #001: RUH Attributes: Unused 00:07:59.177 RUH Usage Desc #002: RUH Attributes: Unused 00:07:59.177 RUH Usage Desc #003: RUH Attributes: Unused 00:07:59.177 RUH Usage Desc #004: RUH Attributes: Unused 00:07:59.177 RUH Usage Desc #005: RUH Attributes: Unused 00:07:59.177 RUH Usage Desc #006: RUH Attributes: Unused 00:07:59.177 RUH Usage Desc #007: RUH Attributes: Unused 00:07:59.177 00:07:59.177 FDP statistics log page 00:07:59.177 ======================= 00:07:59.177 Host bytes with metadata written: 422617088 00:07:59.177 Media bytes with metadata written: 422641664 00:07:59.177 Media bytes erased: 0 00:07:59.177 00:07:59.177 FDP events log page 00:07:59.177 =================== 00:07:59.177 Number of FDP events: 0 00:07:59.177 00:07:59.177 NVM Specific Namespace Data 00:07:59.177 =========================== 00:07:59.177 Logical Block Storage Tag Mask: 0 00:07:59.177 Protection Information Capabilities: 00:07:59.177 16b Guard Protection Information Storage Tag Support: No 00:07:59.177 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.177 Storage Tag Check Read Support: No 00:07:59.177 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.177 00:07:59.177 real 0m1.423s 00:07:59.177 user 0m0.542s 00:07:59.177 sys 0m0.661s 00:07:59.177 12:49:50 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.177 12:49:50 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:59.177 ************************************ 00:07:59.177 END TEST nvme_identify 00:07:59.177 ************************************ 00:07:59.177 12:49:50 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:59.177 12:49:50 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.177 12:49:50 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.177 12:49:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.177 ************************************ 00:07:59.177 START TEST nvme_perf 00:07:59.177 ************************************ 00:07:59.177 12:49:50 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:07:59.177 12:49:50 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:59.177 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:00.555 Initializing NVMe Controllers 00:08:00.555 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.555 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.555 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.555 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.555 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.555 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:00.555 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.555 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.555 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.555 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.555 Initialization complete. Launching workers. 
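In the summary table that follows, the MiB/s column can be recovered from the IOPS column and the 12288-byte I/O size requested with -o in the spdk_nvme_perf invocation above: MiB/s = IOPS x 12288 / 2^20. A purely illustrative shell sanity check of the per-namespace rows (13145.84 IOPS each):

  # 12288-byte reads (-o 12288) at queue depth 128 (-q 128) for 1 second (-t 1)
  awk 'BEGIN { iops = 13145.84; io_size = 12288; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
  # prints 154.05 MiB/s, matching the MiB/s column below; the 78875.03 IOPS total
  # works out to 924.32 MiB/s by the same arithmetic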
00:08:00.555 ======================================================== 00:08:00.555 Latency(us) 00:08:00.555 Device Information : IOPS MiB/s Average min max 00:08:00.555 PCIE (0000:00:10.0) NSID 1 from core 0: 13145.84 154.05 9737.31 5453.96 31685.82 00:08:00.555 PCIE (0000:00:11.0) NSID 1 from core 0: 13145.84 154.05 9726.35 5289.91 30627.87 00:08:00.555 PCIE (0000:00:13.0) NSID 1 from core 0: 13145.84 154.05 9713.01 4487.10 30227.34 00:08:00.555 PCIE (0000:00:12.0) NSID 1 from core 0: 13145.84 154.05 9699.68 4114.81 29208.72 00:08:00.555 PCIE (0000:00:12.0) NSID 2 from core 0: 13145.84 154.05 9686.24 3822.80 28333.71 00:08:00.555 PCIE (0000:00:12.0) NSID 3 from core 0: 13145.84 154.05 9672.42 3362.71 27398.58 00:08:00.555 ======================================================== 00:08:00.555 Total : 78875.03 924.32 9705.84 3362.71 31685.82 00:08:00.555 00:08:00.555 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:00.555 ================================================================================= 00:08:00.555 1.00000% : 7983.476us 00:08:00.555 10.00000% : 8400.524us 00:08:00.555 25.00000% : 8817.571us 00:08:00.555 50.00000% : 9592.087us 00:08:00.555 75.00000% : 10307.025us 00:08:00.555 90.00000% : 10783.651us 00:08:00.555 95.00000% : 11021.964us 00:08:00.555 98.00000% : 11617.745us 00:08:00.555 99.00000% : 14656.233us 00:08:00.555 99.50000% : 24784.524us 00:08:00.555 99.90000% : 31457.280us 00:08:00.555 99.99000% : 31695.593us 00:08:00.555 99.99900% : 31695.593us 00:08:00.555 99.99990% : 31695.593us 00:08:00.555 99.99999% : 31695.593us 00:08:00.555 00:08:00.555 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:00.555 ================================================================================= 00:08:00.555 1.00000% : 8043.055us 00:08:00.555 10.00000% : 8460.102us 00:08:00.555 25.00000% : 8817.571us 00:08:00.555 50.00000% : 9651.665us 00:08:00.555 75.00000% : 10307.025us 00:08:00.555 90.00000% : 10724.073us 00:08:00.555 95.00000% : 10962.385us 00:08:00.555 98.00000% : 11677.324us 00:08:00.555 99.00000% : 14715.811us 00:08:00.555 99.50000% : 24784.524us 00:08:00.555 99.90000% : 30384.873us 00:08:00.555 99.99000% : 30742.342us 00:08:00.555 99.99900% : 30742.342us 00:08:00.555 99.99990% : 30742.342us 00:08:00.555 99.99999% : 30742.342us 00:08:00.555 00:08:00.555 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:00.555 ================================================================================= 00:08:00.555 1.00000% : 7864.320us 00:08:00.555 10.00000% : 8460.102us 00:08:00.555 25.00000% : 8817.571us 00:08:00.555 50.00000% : 9592.087us 00:08:00.555 75.00000% : 10307.025us 00:08:00.555 90.00000% : 10724.073us 00:08:00.555 95.00000% : 10962.385us 00:08:00.555 98.00000% : 11856.058us 00:08:00.555 99.00000% : 14954.124us 00:08:00.555 99.50000% : 24307.898us 00:08:00.555 99.90000% : 30027.404us 00:08:00.555 99.99000% : 30265.716us 00:08:00.555 99.99900% : 30265.716us 00:08:00.555 99.99990% : 30265.716us 00:08:00.555 99.99999% : 30265.716us 00:08:00.555 00:08:00.555 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:00.555 ================================================================================= 00:08:00.555 1.00000% : 7983.476us 00:08:00.555 10.00000% : 8460.102us 00:08:00.555 25.00000% : 8817.571us 00:08:00.555 50.00000% : 9592.087us 00:08:00.555 75.00000% : 10307.025us 00:08:00.555 90.00000% : 10724.073us 00:08:00.555 95.00000% : 10962.385us 00:08:00.555 98.00000% : 11617.745us 00:08:00.555 
99.00000% : 14537.076us 00:08:00.555 99.50000% : 23592.960us 00:08:00.555 99.90000% : 28954.996us 00:08:00.555 99.99000% : 29193.309us 00:08:00.555 99.99900% : 29312.465us 00:08:00.555 99.99990% : 29312.465us 00:08:00.555 99.99999% : 29312.465us 00:08:00.555 00:08:00.555 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:00.555 ================================================================================= 00:08:00.555 1.00000% : 7923.898us 00:08:00.555 10.00000% : 8460.102us 00:08:00.555 25.00000% : 8817.571us 00:08:00.555 50.00000% : 9651.665us 00:08:00.556 75.00000% : 10307.025us 00:08:00.556 90.00000% : 10724.073us 00:08:00.556 95.00000% : 10962.385us 00:08:00.556 98.00000% : 11439.011us 00:08:00.556 99.00000% : 14060.451us 00:08:00.556 99.50000% : 22639.709us 00:08:00.556 99.90000% : 28120.902us 00:08:00.556 99.99000% : 28359.215us 00:08:00.556 99.99900% : 28359.215us 00:08:00.556 99.99990% : 28359.215us 00:08:00.556 99.99999% : 28359.215us 00:08:00.556 00:08:00.556 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:00.556 ================================================================================= 00:08:00.556 1.00000% : 7923.898us 00:08:00.556 10.00000% : 8460.102us 00:08:00.556 25.00000% : 8817.571us 00:08:00.556 50.00000% : 9592.087us 00:08:00.556 75.00000% : 10307.025us 00:08:00.556 90.00000% : 10724.073us 00:08:00.556 95.00000% : 10962.385us 00:08:00.556 98.00000% : 11498.589us 00:08:00.556 99.00000% : 14358.342us 00:08:00.556 99.50000% : 21686.458us 00:08:00.556 99.90000% : 27167.651us 00:08:00.556 99.99000% : 27405.964us 00:08:00.556 99.99900% : 27405.964us 00:08:00.556 99.99990% : 27405.964us 00:08:00.556 99.99999% : 27405.964us 00:08:00.556 00:08:00.556 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:00.556 ============================================================================== 00:08:00.556 Range in us Cumulative IO count 00:08:00.556 5451.404 - 5481.193: 0.0303% ( 4) 00:08:00.556 5481.193 - 5510.982: 0.0455% ( 2) 00:08:00.556 5510.982 - 5540.771: 0.0607% ( 2) 00:08:00.556 5540.771 - 5570.560: 0.0758% ( 2) 00:08:00.556 5570.560 - 5600.349: 0.0986% ( 3) 00:08:00.556 5600.349 - 5630.138: 0.1138% ( 2) 00:08:00.556 5630.138 - 5659.927: 0.1289% ( 2) 00:08:00.556 5659.927 - 5689.716: 0.1517% ( 3) 00:08:00.556 5689.716 - 5719.505: 0.1669% ( 2) 00:08:00.556 5719.505 - 5749.295: 0.1820% ( 2) 00:08:00.556 5749.295 - 5779.084: 0.1972% ( 2) 00:08:00.556 5779.084 - 5808.873: 0.2200% ( 3) 00:08:00.556 5808.873 - 5838.662: 0.2351% ( 2) 00:08:00.556 5838.662 - 5868.451: 0.2579% ( 3) 00:08:00.556 5868.451 - 5898.240: 0.2655% ( 1) 00:08:00.556 5898.240 - 5928.029: 0.2882% ( 3) 00:08:00.556 5928.029 - 5957.818: 0.2958% ( 1) 00:08:00.556 5957.818 - 5987.607: 0.3110% ( 2) 00:08:00.556 5987.607 - 6017.396: 0.3186% ( 1) 00:08:00.556 6017.396 - 6047.185: 0.3337% ( 2) 00:08:00.556 6047.185 - 6076.975: 0.3565% ( 3) 00:08:00.556 6076.975 - 6106.764: 0.3717% ( 2) 00:08:00.556 6106.764 - 6136.553: 0.3944% ( 3) 00:08:00.556 6136.553 - 6166.342: 0.4020% ( 1) 00:08:00.556 6166.342 - 6196.131: 0.4248% ( 3) 00:08:00.556 6196.131 - 6225.920: 0.4399% ( 2) 00:08:00.556 6225.920 - 6255.709: 0.4551% ( 2) 00:08:00.556 6255.709 - 6285.498: 0.4703% ( 2) 00:08:00.556 6285.498 - 6315.287: 0.4854% ( 2) 00:08:00.556 7745.164 - 7804.742: 0.5158% ( 4) 00:08:00.556 7804.742 - 7864.320: 0.5613% ( 6) 00:08:00.556 7864.320 - 7923.898: 0.8116% ( 33) 00:08:00.556 7923.898 - 7983.476: 1.1757% ( 48) 00:08:00.556 7983.476 - 8043.055: 1.6535% ( 63) 
00:08:00.556 8043.055 - 8102.633: 2.3893% ( 97) 00:08:00.556 8102.633 - 8162.211: 3.3677% ( 129) 00:08:00.556 8162.211 - 8221.789: 4.6875% ( 174) 00:08:00.556 8221.789 - 8281.367: 6.2652% ( 208) 00:08:00.556 8281.367 - 8340.945: 8.0325% ( 233) 00:08:00.556 8340.945 - 8400.524: 10.0273% ( 263) 00:08:00.556 8400.524 - 8460.102: 12.1359% ( 278) 00:08:00.556 8460.102 - 8519.680: 14.3507% ( 292) 00:08:00.556 8519.680 - 8579.258: 16.5731% ( 293) 00:08:00.556 8579.258 - 8638.836: 18.8714% ( 303) 00:08:00.556 8638.836 - 8698.415: 21.1393% ( 299) 00:08:00.556 8698.415 - 8757.993: 23.5058% ( 312) 00:08:00.556 8757.993 - 8817.571: 25.7737% ( 299) 00:08:00.556 8817.571 - 8877.149: 28.1629% ( 315) 00:08:00.556 8877.149 - 8936.727: 30.6129% ( 323) 00:08:00.556 8936.727 - 8996.305: 32.9187% ( 304) 00:08:00.556 8996.305 - 9055.884: 35.1183% ( 290) 00:08:00.556 9055.884 - 9115.462: 37.2649% ( 283) 00:08:00.556 9115.462 - 9175.040: 39.2900% ( 267) 00:08:00.556 9175.040 - 9234.618: 41.1256% ( 242) 00:08:00.556 9234.618 - 9294.196: 42.6805% ( 205) 00:08:00.556 9294.196 - 9353.775: 44.1975% ( 200) 00:08:00.556 9353.775 - 9413.353: 45.7145% ( 200) 00:08:00.556 9413.353 - 9472.931: 47.1632% ( 191) 00:08:00.556 9472.931 - 9532.509: 48.6347% ( 194) 00:08:00.556 9532.509 - 9592.087: 50.2048% ( 207) 00:08:00.556 9592.087 - 9651.665: 51.7142% ( 199) 00:08:00.556 9651.665 - 9711.244: 53.4967% ( 235) 00:08:00.556 9711.244 - 9770.822: 55.3777% ( 248) 00:08:00.556 9770.822 - 9830.400: 57.3195% ( 256) 00:08:00.556 9830.400 - 9889.978: 59.3674% ( 270) 00:08:00.556 9889.978 - 9949.556: 61.4988% ( 281) 00:08:00.556 9949.556 - 10009.135: 63.6605% ( 285) 00:08:00.556 10009.135 - 10068.713: 65.9056% ( 296) 00:08:00.556 10068.713 - 10128.291: 68.3177% ( 318) 00:08:00.556 10128.291 - 10187.869: 70.7069% ( 315) 00:08:00.556 10187.869 - 10247.447: 72.9824% ( 300) 00:08:00.556 10247.447 - 10307.025: 75.2882% ( 304) 00:08:00.556 10307.025 - 10366.604: 77.6320% ( 309) 00:08:00.556 10366.604 - 10426.182: 79.8240% ( 289) 00:08:00.556 10426.182 - 10485.760: 81.8796% ( 271) 00:08:00.556 10485.760 - 10545.338: 83.8289% ( 257) 00:08:00.556 10545.338 - 10604.916: 85.7479% ( 253) 00:08:00.556 10604.916 - 10664.495: 87.5758% ( 241) 00:08:00.556 10664.495 - 10724.073: 89.3356% ( 232) 00:08:00.556 10724.073 - 10783.651: 90.7995% ( 193) 00:08:00.556 10783.651 - 10843.229: 92.2330% ( 189) 00:08:00.556 10843.229 - 10902.807: 93.4921% ( 166) 00:08:00.556 10902.807 - 10962.385: 94.4933% ( 132) 00:08:00.556 10962.385 - 11021.964: 95.3580% ( 114) 00:08:00.556 11021.964 - 11081.542: 95.9724% ( 81) 00:08:00.556 11081.542 - 11141.120: 96.4806% ( 67) 00:08:00.556 11141.120 - 11200.698: 96.8447% ( 48) 00:08:00.556 11200.698 - 11260.276: 97.1936% ( 46) 00:08:00.556 11260.276 - 11319.855: 97.5197% ( 43) 00:08:00.556 11319.855 - 11379.433: 97.6866% ( 22) 00:08:00.556 11379.433 - 11439.011: 97.8079% ( 16) 00:08:00.556 11439.011 - 11498.589: 97.9141% ( 14) 00:08:00.556 11498.589 - 11558.167: 97.9748% ( 8) 00:08:00.556 11558.167 - 11617.745: 98.0127% ( 5) 00:08:00.556 11617.745 - 11677.324: 98.0431% ( 4) 00:08:00.556 11677.324 - 11736.902: 98.0507% ( 1) 00:08:00.556 11736.902 - 11796.480: 98.0583% ( 1) 00:08:00.556 12094.371 - 12153.949: 98.0886% ( 4) 00:08:00.556 12153.949 - 12213.527: 98.0962% ( 1) 00:08:00.556 12213.527 - 12273.105: 98.1265% ( 4) 00:08:00.556 12273.105 - 12332.684: 98.1644% ( 5) 00:08:00.556 12332.684 - 12392.262: 98.1948% ( 4) 00:08:00.556 12392.262 - 12451.840: 98.2175% ( 3) 00:08:00.556 12451.840 - 12511.418: 98.2327% ( 2) 00:08:00.556 
12511.418 - 12570.996: 98.2555% ( 3) 00:08:00.556 12570.996 - 12630.575: 98.2934% ( 5) 00:08:00.556 12630.575 - 12690.153: 98.3161% ( 3) 00:08:00.556 12690.153 - 12749.731: 98.3389% ( 3) 00:08:00.556 12749.731 - 12809.309: 98.3692% ( 4) 00:08:00.556 12809.309 - 12868.887: 98.3844% ( 2) 00:08:00.556 12868.887 - 12928.465: 98.4147% ( 4) 00:08:00.556 12928.465 - 12988.044: 98.4375% ( 3) 00:08:00.556 12988.044 - 13047.622: 98.4678% ( 4) 00:08:00.556 13047.622 - 13107.200: 98.4906% ( 3) 00:08:00.556 13107.200 - 13166.778: 98.5133% ( 3) 00:08:00.556 13166.778 - 13226.356: 98.5209% ( 1) 00:08:00.556 13226.356 - 13285.935: 98.5437% ( 3) 00:08:00.556 13524.247 - 13583.825: 98.5589% ( 2) 00:08:00.556 13583.825 - 13643.404: 98.5892% ( 4) 00:08:00.556 13643.404 - 13702.982: 98.6044% ( 2) 00:08:00.556 13702.982 - 13762.560: 98.6271% ( 3) 00:08:00.556 13762.560 - 13822.138: 98.6575% ( 4) 00:08:00.556 13822.138 - 13881.716: 98.6802% ( 3) 00:08:00.556 13881.716 - 13941.295: 98.7030% ( 3) 00:08:00.556 13941.295 - 14000.873: 98.7257% ( 3) 00:08:00.556 14000.873 - 14060.451: 98.7485% ( 3) 00:08:00.556 14060.451 - 14120.029: 98.7712% ( 3) 00:08:00.556 14120.029 - 14179.607: 98.7940% ( 3) 00:08:00.556 14179.607 - 14239.185: 98.8243% ( 4) 00:08:00.556 14239.185 - 14298.764: 98.8471% ( 3) 00:08:00.556 14298.764 - 14358.342: 98.8774% ( 4) 00:08:00.556 14358.342 - 14417.920: 98.9002% ( 3) 00:08:00.556 14417.920 - 14477.498: 98.9381% ( 5) 00:08:00.556 14477.498 - 14537.076: 98.9609% ( 3) 00:08:00.556 14537.076 - 14596.655: 98.9836% ( 3) 00:08:00.556 14596.655 - 14656.233: 99.0064% ( 3) 00:08:00.556 14656.233 - 14715.811: 99.0291% ( 3) 00:08:00.556 22997.178 - 23116.335: 99.0367% ( 1) 00:08:00.556 23116.335 - 23235.491: 99.0671% ( 4) 00:08:00.556 23235.491 - 23354.647: 99.0974% ( 4) 00:08:00.556 23354.647 - 23473.804: 99.1201% ( 3) 00:08:00.556 23473.804 - 23592.960: 99.1581% ( 5) 00:08:00.556 23592.960 - 23712.116: 99.1884% ( 4) 00:08:00.556 23712.116 - 23831.273: 99.2339% ( 6) 00:08:00.556 23831.273 - 23950.429: 99.2643% ( 4) 00:08:00.556 23950.429 - 24069.585: 99.3022% ( 5) 00:08:00.556 24069.585 - 24188.742: 99.3401% ( 5) 00:08:00.556 24188.742 - 24307.898: 99.3780% ( 5) 00:08:00.556 24307.898 - 24427.055: 99.4084% ( 4) 00:08:00.556 24427.055 - 24546.211: 99.4463% ( 5) 00:08:00.556 24546.211 - 24665.367: 99.4766% ( 4) 00:08:00.556 24665.367 - 24784.524: 99.5070% ( 4) 00:08:00.556 24784.524 - 24903.680: 99.5146% ( 1) 00:08:00.556 29908.247 - 30027.404: 99.5221% ( 1) 00:08:00.556 30027.404 - 30146.560: 99.5525% ( 4) 00:08:00.556 30146.560 - 30265.716: 99.5828% ( 4) 00:08:00.556 30265.716 - 30384.873: 99.6132% ( 4) 00:08:00.556 30384.873 - 30504.029: 99.6435% ( 4) 00:08:00.556 30504.029 - 30742.342: 99.7118% ( 9) 00:08:00.556 30742.342 - 30980.655: 99.7876% ( 10) 00:08:00.556 30980.655 - 31218.967: 99.8711% ( 11) 00:08:00.556 31218.967 - 31457.280: 99.9469% ( 10) 00:08:00.556 31457.280 - 31695.593: 100.0000% ( 7) 00:08:00.556 00:08:00.556 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:00.556 ============================================================================== 00:08:00.556 Range in us Cumulative IO count 00:08:00.557 5272.669 - 5302.458: 0.0076% ( 1) 00:08:00.557 5302.458 - 5332.247: 0.0379% ( 4) 00:08:00.557 5332.247 - 5362.036: 0.0607% ( 3) 00:08:00.557 5362.036 - 5391.825: 0.0758% ( 2) 00:08:00.557 5391.825 - 5421.615: 0.0986% ( 3) 00:08:00.557 5421.615 - 5451.404: 0.1214% ( 3) 00:08:00.557 5451.404 - 5481.193: 0.1517% ( 4) 00:08:00.557 5481.193 - 5510.982: 0.1820% ( 4) 00:08:00.557 
5510.982 - 5540.771: 0.1972% ( 2) 00:08:00.557 5540.771 - 5570.560: 0.2200% ( 3) 00:08:00.557 5570.560 - 5600.349: 0.2427% ( 3) 00:08:00.557 5600.349 - 5630.138: 0.2579% ( 2) 00:08:00.557 5630.138 - 5659.927: 0.2806% ( 3) 00:08:00.557 5659.927 - 5689.716: 0.3034% ( 3) 00:08:00.557 5689.716 - 5719.505: 0.3186% ( 2) 00:08:00.557 5719.505 - 5749.295: 0.3413% ( 3) 00:08:00.557 5749.295 - 5779.084: 0.3641% ( 3) 00:08:00.557 5779.084 - 5808.873: 0.3792% ( 2) 00:08:00.557 5808.873 - 5838.662: 0.4020% ( 3) 00:08:00.557 5838.662 - 5868.451: 0.4248% ( 3) 00:08:00.557 5868.451 - 5898.240: 0.4399% ( 2) 00:08:00.557 5898.240 - 5928.029: 0.4627% ( 3) 00:08:00.557 5928.029 - 5957.818: 0.4854% ( 3) 00:08:00.557 7804.742 - 7864.320: 0.4930% ( 1) 00:08:00.557 7864.320 - 7923.898: 0.5385% ( 6) 00:08:00.557 7923.898 - 7983.476: 0.7585% ( 29) 00:08:00.557 7983.476 - 8043.055: 1.0998% ( 45) 00:08:00.557 8043.055 - 8102.633: 1.6232% ( 69) 00:08:00.557 8102.633 - 8162.211: 2.3968% ( 102) 00:08:00.557 8162.211 - 8221.789: 3.4132% ( 134) 00:08:00.557 8221.789 - 8281.367: 4.7709% ( 179) 00:08:00.557 8281.367 - 8340.945: 6.4851% ( 226) 00:08:00.557 8340.945 - 8400.524: 8.4648% ( 261) 00:08:00.557 8400.524 - 8460.102: 10.5431% ( 274) 00:08:00.557 8460.102 - 8519.680: 12.9399% ( 316) 00:08:00.557 8519.680 - 8579.258: 15.4885% ( 336) 00:08:00.557 8579.258 - 8638.836: 18.0977% ( 344) 00:08:00.557 8638.836 - 8698.415: 20.8207% ( 359) 00:08:00.557 8698.415 - 8757.993: 23.5589% ( 361) 00:08:00.557 8757.993 - 8817.571: 26.2743% ( 358) 00:08:00.557 8817.571 - 8877.149: 29.0124% ( 361) 00:08:00.557 8877.149 - 8936.727: 31.6444% ( 347) 00:08:00.557 8936.727 - 8996.305: 34.0944% ( 323) 00:08:00.557 8996.305 - 9055.884: 36.4457% ( 310) 00:08:00.557 9055.884 - 9115.462: 38.5922% ( 283) 00:08:00.557 9115.462 - 9175.040: 40.4733% ( 248) 00:08:00.557 9175.040 - 9234.618: 42.0206% ( 204) 00:08:00.557 9234.618 - 9294.196: 43.3480% ( 175) 00:08:00.557 9294.196 - 9353.775: 44.6526% ( 172) 00:08:00.557 9353.775 - 9413.353: 45.8890% ( 163) 00:08:00.557 9413.353 - 9472.931: 47.1405% ( 165) 00:08:00.557 9472.931 - 9532.509: 48.4223% ( 169) 00:08:00.557 9532.509 - 9592.087: 49.7421% ( 174) 00:08:00.557 9592.087 - 9651.665: 51.1681% ( 188) 00:08:00.557 9651.665 - 9711.244: 52.6396% ( 194) 00:08:00.557 9711.244 - 9770.822: 54.3386% ( 224) 00:08:00.557 9770.822 - 9830.400: 56.1666% ( 241) 00:08:00.557 9830.400 - 9889.978: 58.2979% ( 281) 00:08:00.557 9889.978 - 9949.556: 60.6417% ( 309) 00:08:00.557 9949.556 - 10009.135: 63.1144% ( 326) 00:08:00.557 10009.135 - 10068.713: 65.6478% ( 334) 00:08:00.557 10068.713 - 10128.291: 68.2797% ( 347) 00:08:00.557 10128.291 - 10187.869: 70.9648% ( 354) 00:08:00.557 10187.869 - 10247.447: 73.5058% ( 335) 00:08:00.557 10247.447 - 10307.025: 76.0695% ( 338) 00:08:00.557 10307.025 - 10366.604: 78.5346% ( 325) 00:08:00.557 10366.604 - 10426.182: 80.8783% ( 309) 00:08:00.557 10426.182 - 10485.760: 83.2145% ( 308) 00:08:00.557 10485.760 - 10545.338: 85.3990% ( 288) 00:08:00.557 10545.338 - 10604.916: 87.4848% ( 275) 00:08:00.557 10604.916 - 10664.495: 89.3735% ( 249) 00:08:00.557 10664.495 - 10724.073: 91.0118% ( 216) 00:08:00.557 10724.073 - 10783.651: 92.4909% ( 195) 00:08:00.557 10783.651 - 10843.229: 93.7121% ( 161) 00:08:00.557 10843.229 - 10902.807: 94.6223% ( 120) 00:08:00.557 10902.807 - 10962.385: 95.3959% ( 102) 00:08:00.557 10962.385 - 11021.964: 95.9496% ( 73) 00:08:00.557 11021.964 - 11081.542: 96.4502% ( 66) 00:08:00.557 11081.542 - 11141.120: 96.8750% ( 56) 00:08:00.557 11141.120 - 11200.698: 
97.1936% ( 42) 00:08:00.557 11200.698 - 11260.276: 97.4135% ( 29) 00:08:00.557 11260.276 - 11319.855: 97.5956% ( 24) 00:08:00.557 11319.855 - 11379.433: 97.7397% ( 19) 00:08:00.557 11379.433 - 11439.011: 97.8231% ( 11) 00:08:00.557 11439.011 - 11498.589: 97.8990% ( 10) 00:08:00.557 11498.589 - 11558.167: 97.9369% ( 5) 00:08:00.557 11558.167 - 11617.745: 97.9900% ( 7) 00:08:00.557 11617.745 - 11677.324: 98.0203% ( 4) 00:08:00.557 11677.324 - 11736.902: 98.0507% ( 4) 00:08:00.557 11736.902 - 11796.480: 98.0658% ( 2) 00:08:00.557 11796.480 - 11856.058: 98.0962% ( 4) 00:08:00.557 11856.058 - 11915.636: 98.1265% ( 4) 00:08:00.557 11915.636 - 11975.215: 98.1493% ( 3) 00:08:00.557 11975.215 - 12034.793: 98.1796% ( 4) 00:08:00.557 12034.793 - 12094.371: 98.2100% ( 4) 00:08:00.557 12094.371 - 12153.949: 98.2403% ( 4) 00:08:00.557 12153.949 - 12213.527: 98.2782% ( 5) 00:08:00.557 12213.527 - 12273.105: 98.3086% ( 4) 00:08:00.557 12273.105 - 12332.684: 98.3389% ( 4) 00:08:00.557 12332.684 - 12392.262: 98.3692% ( 4) 00:08:00.557 12392.262 - 12451.840: 98.3996% ( 4) 00:08:00.557 12451.840 - 12511.418: 98.4299% ( 4) 00:08:00.557 12511.418 - 12570.996: 98.4678% ( 5) 00:08:00.557 12570.996 - 12630.575: 98.4906% ( 3) 00:08:00.557 12630.575 - 12690.153: 98.5209% ( 4) 00:08:00.557 12690.153 - 12749.731: 98.5361% ( 2) 00:08:00.557 12749.731 - 12809.309: 98.5437% ( 1) 00:08:00.557 13762.560 - 13822.138: 98.5664% ( 3) 00:08:00.557 13822.138 - 13881.716: 98.5892% ( 3) 00:08:00.557 13881.716 - 13941.295: 98.6195% ( 4) 00:08:00.557 13941.295 - 14000.873: 98.6423% ( 3) 00:08:00.557 14000.873 - 14060.451: 98.6726% ( 4) 00:08:00.557 14060.451 - 14120.029: 98.7030% ( 4) 00:08:00.557 14120.029 - 14179.607: 98.7333% ( 4) 00:08:00.557 14179.607 - 14239.185: 98.7637% ( 4) 00:08:00.557 14239.185 - 14298.764: 98.8016% ( 5) 00:08:00.557 14298.764 - 14358.342: 98.8319% ( 4) 00:08:00.557 14358.342 - 14417.920: 98.8623% ( 4) 00:08:00.557 14417.920 - 14477.498: 98.8926% ( 4) 00:08:00.557 14477.498 - 14537.076: 98.9229% ( 4) 00:08:00.557 14537.076 - 14596.655: 98.9533% ( 4) 00:08:00.557 14596.655 - 14656.233: 98.9912% ( 5) 00:08:00.557 14656.233 - 14715.811: 99.0140% ( 3) 00:08:00.557 14715.811 - 14775.389: 99.0291% ( 2) 00:08:00.557 23235.491 - 23354.647: 99.0595% ( 4) 00:08:00.557 23354.647 - 23473.804: 99.0898% ( 4) 00:08:00.557 23473.804 - 23592.960: 99.1277% ( 5) 00:08:00.557 23592.960 - 23712.116: 99.1732% ( 6) 00:08:00.557 23712.116 - 23831.273: 99.1960% ( 3) 00:08:00.557 23831.273 - 23950.429: 99.2415% ( 6) 00:08:00.557 23950.429 - 24069.585: 99.2794% ( 5) 00:08:00.557 24069.585 - 24188.742: 99.3098% ( 4) 00:08:00.557 24188.742 - 24307.898: 99.3477% ( 5) 00:08:00.557 24307.898 - 24427.055: 99.3856% ( 5) 00:08:00.557 24427.055 - 24546.211: 99.4311% ( 6) 00:08:00.557 24546.211 - 24665.367: 99.4766% ( 6) 00:08:00.557 24665.367 - 24784.524: 99.5146% ( 5) 00:08:00.557 29193.309 - 29312.465: 99.5449% ( 4) 00:08:00.557 29312.465 - 29431.622: 99.5828% ( 5) 00:08:00.557 29431.622 - 29550.778: 99.6208% ( 5) 00:08:00.557 29550.778 - 29669.935: 99.6587% ( 5) 00:08:00.557 29669.935 - 29789.091: 99.6966% ( 5) 00:08:00.557 29789.091 - 29908.247: 99.7421% ( 6) 00:08:00.557 29908.247 - 30027.404: 99.7876% ( 6) 00:08:00.557 30027.404 - 30146.560: 99.8255% ( 5) 00:08:00.557 30146.560 - 30265.716: 99.8635% ( 5) 00:08:00.557 30265.716 - 30384.873: 99.9090% ( 6) 00:08:00.557 30384.873 - 30504.029: 99.9545% ( 6) 00:08:00.557 30504.029 - 30742.342: 100.0000% ( 6) 00:08:00.557 00:08:00.557 Latency histogram for PCIE (0000:00:13.0) NSID 1 from 
core 0: 00:08:00.557 ============================================================================== 00:08:00.557 Range in us Cumulative IO count 00:08:00.557 4468.364 - 4498.153: 0.0228% ( 3) 00:08:00.557 4498.153 - 4527.942: 0.0379% ( 2) 00:08:00.557 4527.942 - 4557.731: 0.0607% ( 3) 00:08:00.557 4557.731 - 4587.520: 0.0834% ( 3) 00:08:00.557 4587.520 - 4617.309: 0.1062% ( 3) 00:08:00.557 4617.309 - 4647.098: 0.1214% ( 2) 00:08:00.557 4647.098 - 4676.887: 0.1441% ( 3) 00:08:00.557 4676.887 - 4706.676: 0.1745% ( 4) 00:08:00.557 4706.676 - 4736.465: 0.1972% ( 3) 00:08:00.557 4736.465 - 4766.255: 0.2124% ( 2) 00:08:00.557 4766.255 - 4796.044: 0.2351% ( 3) 00:08:00.557 4796.044 - 4825.833: 0.2579% ( 3) 00:08:00.557 4825.833 - 4855.622: 0.2806% ( 3) 00:08:00.557 4855.622 - 4885.411: 0.2958% ( 2) 00:08:00.557 4885.411 - 4915.200: 0.3186% ( 3) 00:08:00.557 4915.200 - 4944.989: 0.3413% ( 3) 00:08:00.557 4944.989 - 4974.778: 0.3641% ( 3) 00:08:00.557 4974.778 - 5004.567: 0.3868% ( 3) 00:08:00.557 5004.567 - 5034.356: 0.4096% ( 3) 00:08:00.557 5034.356 - 5064.145: 0.4248% ( 2) 00:08:00.557 5064.145 - 5093.935: 0.4475% ( 3) 00:08:00.557 5093.935 - 5123.724: 0.4703% ( 3) 00:08:00.557 5123.724 - 5153.513: 0.4854% ( 2) 00:08:00.557 7119.593 - 7149.382: 0.4930% ( 1) 00:08:00.557 7149.382 - 7179.171: 0.5158% ( 3) 00:08:00.557 7179.171 - 7208.960: 0.5309% ( 2) 00:08:00.557 7208.960 - 7238.749: 0.5537% ( 3) 00:08:00.557 7238.749 - 7268.538: 0.5840% ( 4) 00:08:00.557 7268.538 - 7298.327: 0.6068% ( 3) 00:08:00.557 7298.327 - 7328.116: 0.6220% ( 2) 00:08:00.557 7328.116 - 7357.905: 0.6447% ( 3) 00:08:00.557 7357.905 - 7387.695: 0.6675% ( 3) 00:08:00.557 7387.695 - 7417.484: 0.6902% ( 3) 00:08:00.557 7417.484 - 7447.273: 0.7054% ( 2) 00:08:00.557 7447.273 - 7477.062: 0.7282% ( 3) 00:08:00.557 7477.062 - 7506.851: 0.7509% ( 3) 00:08:00.557 7506.851 - 7536.640: 0.7661% ( 2) 00:08:00.557 7536.640 - 7566.429: 0.7888% ( 3) 00:08:00.557 7566.429 - 7596.218: 0.8116% ( 3) 00:08:00.557 7596.218 - 7626.007: 0.8343% ( 3) 00:08:00.557 7626.007 - 7685.585: 0.8723% ( 5) 00:08:00.557 7685.585 - 7745.164: 0.9102% ( 5) 00:08:00.558 7745.164 - 7804.742: 0.9633% ( 7) 00:08:00.558 7804.742 - 7864.320: 1.0164% ( 7) 00:08:00.558 7864.320 - 7923.898: 1.0846% ( 9) 00:08:00.558 7923.898 - 7983.476: 1.2667% ( 24) 00:08:00.558 7983.476 - 8043.055: 1.6004% ( 44) 00:08:00.558 8043.055 - 8102.633: 2.1086% ( 67) 00:08:00.558 8102.633 - 8162.211: 2.7230% ( 81) 00:08:00.558 8162.211 - 8221.789: 3.6787% ( 126) 00:08:00.558 8221.789 - 8281.367: 5.0592% ( 182) 00:08:00.558 8281.367 - 8340.945: 6.6672% ( 212) 00:08:00.558 8340.945 - 8400.524: 8.6089% ( 256) 00:08:00.558 8400.524 - 8460.102: 10.8541% ( 296) 00:08:00.558 8460.102 - 8519.680: 13.2130% ( 311) 00:08:00.558 8519.680 - 8579.258: 15.7236% ( 331) 00:08:00.558 8579.258 - 8638.836: 18.3556% ( 347) 00:08:00.558 8638.836 - 8698.415: 21.0331% ( 353) 00:08:00.558 8698.415 - 8757.993: 23.6726% ( 348) 00:08:00.558 8757.993 - 8817.571: 26.4411% ( 365) 00:08:00.558 8817.571 - 8877.149: 29.1186% ( 353) 00:08:00.558 8877.149 - 8936.727: 31.6672% ( 336) 00:08:00.558 8936.727 - 8996.305: 34.0944% ( 320) 00:08:00.558 8996.305 - 9055.884: 36.5367% ( 322) 00:08:00.558 9055.884 - 9115.462: 38.7363% ( 290) 00:08:00.558 9115.462 - 9175.040: 40.5947% ( 245) 00:08:00.558 9175.040 - 9234.618: 42.1875% ( 210) 00:08:00.558 9234.618 - 9294.196: 43.6514% ( 193) 00:08:00.558 9294.196 - 9353.775: 45.0091% ( 179) 00:08:00.558 9353.775 - 9413.353: 46.3820% ( 181) 00:08:00.558 9413.353 - 9472.931: 47.7093% ( 175) 
00:08:00.558 9472.931 - 9532.509: 49.0671% ( 179) 00:08:00.558 9532.509 - 9592.087: 50.4475% ( 182) 00:08:00.558 9592.087 - 9651.665: 51.8811% ( 189) 00:08:00.558 9651.665 - 9711.244: 53.3298% ( 191) 00:08:00.558 9711.244 - 9770.822: 54.8999% ( 207) 00:08:00.558 9770.822 - 9830.400: 56.8265% ( 254) 00:08:00.558 9830.400 - 9889.978: 58.8441% ( 266) 00:08:00.558 9889.978 - 9949.556: 61.1271% ( 301) 00:08:00.558 9949.556 - 10009.135: 63.4860% ( 311) 00:08:00.558 10009.135 - 10068.713: 65.9208% ( 321) 00:08:00.558 10068.713 - 10128.291: 68.3859% ( 325) 00:08:00.558 10128.291 - 10187.869: 70.9421% ( 337) 00:08:00.558 10187.869 - 10247.447: 73.5513% ( 344) 00:08:00.558 10247.447 - 10307.025: 75.9329% ( 314) 00:08:00.558 10307.025 - 10366.604: 78.3070% ( 313) 00:08:00.558 10366.604 - 10426.182: 80.6887% ( 314) 00:08:00.558 10426.182 - 10485.760: 82.9794% ( 302) 00:08:00.558 10485.760 - 10545.338: 85.0880% ( 278) 00:08:00.558 10545.338 - 10604.916: 87.0601% ( 260) 00:08:00.558 10604.916 - 10664.495: 88.8350% ( 234) 00:08:00.558 10664.495 - 10724.073: 90.5340% ( 224) 00:08:00.558 10724.073 - 10783.651: 91.9827% ( 191) 00:08:00.558 10783.651 - 10843.229: 93.2797% ( 171) 00:08:00.558 10843.229 - 10902.807: 94.3265% ( 138) 00:08:00.558 10902.807 - 10962.385: 95.1077% ( 103) 00:08:00.558 10962.385 - 11021.964: 95.6766% ( 75) 00:08:00.558 11021.964 - 11081.542: 96.1393% ( 61) 00:08:00.558 11081.542 - 11141.120: 96.5868% ( 59) 00:08:00.558 11141.120 - 11200.698: 96.9205% ( 44) 00:08:00.558 11200.698 - 11260.276: 97.2163% ( 39) 00:08:00.558 11260.276 - 11319.855: 97.4059% ( 25) 00:08:00.558 11319.855 - 11379.433: 97.5349% ( 17) 00:08:00.558 11379.433 - 11439.011: 97.6183% ( 11) 00:08:00.558 11439.011 - 11498.589: 97.6866% ( 9) 00:08:00.558 11498.589 - 11558.167: 97.7397% ( 7) 00:08:00.558 11558.167 - 11617.745: 97.8004% ( 8) 00:08:00.558 11617.745 - 11677.324: 97.8535% ( 7) 00:08:00.558 11677.324 - 11736.902: 97.9141% ( 8) 00:08:00.558 11736.902 - 11796.480: 97.9748% ( 8) 00:08:00.558 11796.480 - 11856.058: 98.0431% ( 9) 00:08:00.558 11856.058 - 11915.636: 98.0886% ( 6) 00:08:00.558 11915.636 - 11975.215: 98.1569% ( 9) 00:08:00.558 11975.215 - 12034.793: 98.2175% ( 8) 00:08:00.558 12034.793 - 12094.371: 98.2706% ( 7) 00:08:00.558 12094.371 - 12153.949: 98.3389% ( 9) 00:08:00.558 12153.949 - 12213.527: 98.3996% ( 8) 00:08:00.558 12213.527 - 12273.105: 98.4299% ( 4) 00:08:00.558 12273.105 - 12332.684: 98.4603% ( 4) 00:08:00.558 12332.684 - 12392.262: 98.4982% ( 5) 00:08:00.558 12392.262 - 12451.840: 98.5285% ( 4) 00:08:00.558 12451.840 - 12511.418: 98.5437% ( 2) 00:08:00.558 13941.295 - 14000.873: 98.5513% ( 1) 00:08:00.558 14000.873 - 14060.451: 98.5816% ( 4) 00:08:00.558 14060.451 - 14120.029: 98.6120% ( 4) 00:08:00.558 14120.029 - 14179.607: 98.6347% ( 3) 00:08:00.558 14179.607 - 14239.185: 98.6575% ( 3) 00:08:00.558 14239.185 - 14298.764: 98.6878% ( 4) 00:08:00.558 14298.764 - 14358.342: 98.7181% ( 4) 00:08:00.558 14358.342 - 14417.920: 98.7485% ( 4) 00:08:00.558 14417.920 - 14477.498: 98.7788% ( 4) 00:08:00.558 14477.498 - 14537.076: 98.8167% ( 5) 00:08:00.558 14537.076 - 14596.655: 98.8471% ( 4) 00:08:00.558 14596.655 - 14656.233: 98.8774% ( 4) 00:08:00.558 14656.233 - 14715.811: 98.9078% ( 4) 00:08:00.558 14715.811 - 14775.389: 98.9381% ( 4) 00:08:00.558 14775.389 - 14834.967: 98.9684% ( 4) 00:08:00.558 14834.967 - 14894.545: 98.9988% ( 4) 00:08:00.558 14894.545 - 14954.124: 99.0291% ( 4) 00:08:00.558 22758.865 - 22878.022: 99.0443% ( 2) 00:08:00.558 22878.022 - 22997.178: 99.0746% ( 4) 00:08:00.558 
22997.178 - 23116.335: 99.1201% ( 6) 00:08:00.558 23116.335 - 23235.491: 99.1505% ( 4) 00:08:00.558 23235.491 - 23354.647: 99.1884% ( 5) 00:08:00.558 23354.647 - 23473.804: 99.2263% ( 5) 00:08:00.558 23473.804 - 23592.960: 99.2643% ( 5) 00:08:00.558 23592.960 - 23712.116: 99.3098% ( 6) 00:08:00.558 23712.116 - 23831.273: 99.3477% ( 5) 00:08:00.558 23831.273 - 23950.429: 99.3856% ( 5) 00:08:00.558 23950.429 - 24069.585: 99.4235% ( 5) 00:08:00.558 24069.585 - 24188.742: 99.4615% ( 5) 00:08:00.558 24188.742 - 24307.898: 99.5070% ( 6) 00:08:00.558 24307.898 - 24427.055: 99.5146% ( 1) 00:08:00.558 28716.684 - 28835.840: 99.5373% ( 3) 00:08:00.558 28835.840 - 28954.996: 99.5752% ( 5) 00:08:00.558 28954.996 - 29074.153: 99.6056% ( 4) 00:08:00.558 29074.153 - 29193.309: 99.6435% ( 5) 00:08:00.558 29193.309 - 29312.465: 99.6814% ( 5) 00:08:00.558 29312.465 - 29431.622: 99.7194% ( 5) 00:08:00.558 29431.622 - 29550.778: 99.7649% ( 6) 00:08:00.558 29550.778 - 29669.935: 99.8028% ( 5) 00:08:00.558 29669.935 - 29789.091: 99.8483% ( 6) 00:08:00.558 29789.091 - 29908.247: 99.8938% ( 6) 00:08:00.558 29908.247 - 30027.404: 99.9317% ( 5) 00:08:00.558 30027.404 - 30146.560: 99.9697% ( 5) 00:08:00.558 30146.560 - 30265.716: 100.0000% ( 4) 00:08:00.558 00:08:00.558 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:00.558 ============================================================================== 00:08:00.558 Range in us Cumulative IO count 00:08:00.558 4110.895 - 4140.684: 0.0228% ( 3) 00:08:00.558 4140.684 - 4170.473: 0.0455% ( 3) 00:08:00.558 4170.473 - 4200.262: 0.0607% ( 2) 00:08:00.558 4200.262 - 4230.051: 0.0758% ( 2) 00:08:00.558 4230.051 - 4259.840: 0.0986% ( 3) 00:08:00.558 4259.840 - 4289.629: 0.1138% ( 2) 00:08:00.558 4289.629 - 4319.418: 0.1289% ( 2) 00:08:00.558 4319.418 - 4349.207: 0.1517% ( 3) 00:08:00.558 4349.207 - 4378.996: 0.1745% ( 3) 00:08:00.558 4378.996 - 4408.785: 0.1896% ( 2) 00:08:00.558 4408.785 - 4438.575: 0.2124% ( 3) 00:08:00.558 4438.575 - 4468.364: 0.2351% ( 3) 00:08:00.558 4468.364 - 4498.153: 0.2579% ( 3) 00:08:00.558 4498.153 - 4527.942: 0.2731% ( 2) 00:08:00.558 4527.942 - 4557.731: 0.2958% ( 3) 00:08:00.558 4557.731 - 4587.520: 0.3186% ( 3) 00:08:00.558 4587.520 - 4617.309: 0.3413% ( 3) 00:08:00.558 4617.309 - 4647.098: 0.3641% ( 3) 00:08:00.558 4647.098 - 4676.887: 0.3792% ( 2) 00:08:00.558 4676.887 - 4706.676: 0.4020% ( 3) 00:08:00.558 4706.676 - 4736.465: 0.4248% ( 3) 00:08:00.558 4736.465 - 4766.255: 0.4475% ( 3) 00:08:00.558 4766.255 - 4796.044: 0.4703% ( 3) 00:08:00.558 4796.044 - 4825.833: 0.4854% ( 2) 00:08:00.558 6732.335 - 6762.124: 0.5082% ( 3) 00:08:00.558 6762.124 - 6791.913: 0.5234% ( 2) 00:08:00.558 6791.913 - 6821.702: 0.5385% ( 2) 00:08:00.558 6821.702 - 6851.491: 0.5689% ( 4) 00:08:00.558 6851.491 - 6881.280: 0.5916% ( 3) 00:08:00.558 6881.280 - 6911.069: 0.6068% ( 2) 00:08:00.558 6911.069 - 6940.858: 0.6296% ( 3) 00:08:00.558 6940.858 - 6970.647: 0.6523% ( 3) 00:08:00.558 6970.647 - 7000.436: 0.6751% ( 3) 00:08:00.558 7000.436 - 7030.225: 0.6978% ( 3) 00:08:00.558 7030.225 - 7060.015: 0.7130% ( 2) 00:08:00.558 7060.015 - 7089.804: 0.7282% ( 2) 00:08:00.558 7089.804 - 7119.593: 0.7509% ( 3) 00:08:00.558 7119.593 - 7149.382: 0.7585% ( 1) 00:08:00.558 7149.382 - 7179.171: 0.7812% ( 3) 00:08:00.558 7179.171 - 7208.960: 0.7964% ( 2) 00:08:00.558 7208.960 - 7238.749: 0.8192% ( 3) 00:08:00.558 7238.749 - 7268.538: 0.8419% ( 3) 00:08:00.558 7268.538 - 7298.327: 0.8647% ( 3) 00:08:00.558 7298.327 - 7328.116: 0.8799% ( 2) 00:08:00.558 7328.116 - 
7357.905: 0.9026% ( 3) 00:08:00.558 7357.905 - 7387.695: 0.9254% ( 3) 00:08:00.558 7387.695 - 7417.484: 0.9481% ( 3) 00:08:00.558 7417.484 - 7447.273: 0.9709% ( 3) 00:08:00.558 7864.320 - 7923.898: 0.9936% ( 3) 00:08:00.558 7923.898 - 7983.476: 1.1908% ( 26) 00:08:00.558 7983.476 - 8043.055: 1.5170% ( 43) 00:08:00.558 8043.055 - 8102.633: 2.0252% ( 67) 00:08:00.558 8102.633 - 8162.211: 2.7837% ( 100) 00:08:00.558 8162.211 - 8221.789: 3.7242% ( 124) 00:08:00.558 8221.789 - 8281.367: 4.9833% ( 166) 00:08:00.558 8281.367 - 8340.945: 6.6065% ( 214) 00:08:00.558 8340.945 - 8400.524: 8.5634% ( 258) 00:08:00.558 8400.524 - 8460.102: 10.7175% ( 284) 00:08:00.558 8460.102 - 8519.680: 13.1068% ( 315) 00:08:00.558 8519.680 - 8579.258: 15.7539% ( 349) 00:08:00.558 8579.258 - 8638.836: 18.3632% ( 344) 00:08:00.558 8638.836 - 8698.415: 21.0027% ( 348) 00:08:00.558 8698.415 - 8757.993: 23.7106% ( 357) 00:08:00.558 8757.993 - 8817.571: 26.3956% ( 354) 00:08:00.558 8817.571 - 8877.149: 29.0655% ( 352) 00:08:00.558 8877.149 - 8936.727: 31.5837% ( 332) 00:08:00.558 8936.727 - 8996.305: 34.1550% ( 339) 00:08:00.559 8996.305 - 9055.884: 36.4912% ( 308) 00:08:00.559 9055.884 - 9115.462: 38.6453% ( 284) 00:08:00.559 9115.462 - 9175.040: 40.5567% ( 252) 00:08:00.559 9175.040 - 9234.618: 42.1647% ( 212) 00:08:00.559 9234.618 - 9294.196: 43.6286% ( 193) 00:08:00.559 9294.196 - 9353.775: 44.8726% ( 164) 00:08:00.559 9353.775 - 9413.353: 46.2758% ( 185) 00:08:00.559 9413.353 - 9472.931: 47.5046% ( 162) 00:08:00.559 9472.931 - 9532.509: 48.8395% ( 176) 00:08:00.559 9532.509 - 9592.087: 50.1214% ( 169) 00:08:00.559 9592.087 - 9651.665: 51.4867% ( 180) 00:08:00.559 9651.665 - 9711.244: 52.9505% ( 193) 00:08:00.559 9711.244 - 9770.822: 54.5510% ( 211) 00:08:00.559 9770.822 - 9830.400: 56.4320% ( 248) 00:08:00.559 9830.400 - 9889.978: 58.4041% ( 260) 00:08:00.559 9889.978 - 9949.556: 60.7630% ( 311) 00:08:00.559 9949.556 - 10009.135: 63.3343% ( 339) 00:08:00.559 10009.135 - 10068.713: 65.8905% ( 337) 00:08:00.559 10068.713 - 10128.291: 68.4618% ( 339) 00:08:00.559 10128.291 - 10187.869: 71.1468% ( 354) 00:08:00.559 10187.869 - 10247.447: 73.7409% ( 342) 00:08:00.559 10247.447 - 10307.025: 76.2060% ( 325) 00:08:00.559 10307.025 - 10366.604: 78.5953% ( 315) 00:08:00.559 10366.604 - 10426.182: 80.9542% ( 311) 00:08:00.559 10426.182 - 10485.760: 83.2145% ( 298) 00:08:00.559 10485.760 - 10545.338: 85.3914% ( 287) 00:08:00.559 10545.338 - 10604.916: 87.3635% ( 260) 00:08:00.559 10604.916 - 10664.495: 89.1687% ( 238) 00:08:00.559 10664.495 - 10724.073: 90.8450% ( 221) 00:08:00.559 10724.073 - 10783.651: 92.3164% ( 194) 00:08:00.559 10783.651 - 10843.229: 93.5983% ( 169) 00:08:00.559 10843.229 - 10902.807: 94.6147% ( 134) 00:08:00.559 10902.807 - 10962.385: 95.3808% ( 101) 00:08:00.559 10962.385 - 11021.964: 95.9800% ( 79) 00:08:00.559 11021.964 - 11081.542: 96.5109% ( 70) 00:08:00.559 11081.542 - 11141.120: 96.9205% ( 54) 00:08:00.559 11141.120 - 11200.698: 97.2239% ( 40) 00:08:00.559 11200.698 - 11260.276: 97.4363% ( 28) 00:08:00.559 11260.276 - 11319.855: 97.6107% ( 23) 00:08:00.559 11319.855 - 11379.433: 97.7321% ( 16) 00:08:00.559 11379.433 - 11439.011: 97.8307% ( 13) 00:08:00.559 11439.011 - 11498.589: 97.9293% ( 13) 00:08:00.559 11498.589 - 11558.167: 97.9976% ( 9) 00:08:00.559 11558.167 - 11617.745: 98.0507% ( 7) 00:08:00.559 11617.745 - 11677.324: 98.0583% ( 1) 00:08:00.559 12153.949 - 12213.527: 98.0734% ( 2) 00:08:00.559 12213.527 - 12273.105: 98.0962% ( 3) 00:08:00.559 12273.105 - 12332.684: 98.1265% ( 4) 00:08:00.559 
12332.684 - 12392.262: 98.1644% ( 5) 00:08:00.559 12392.262 - 12451.840: 98.2024% ( 5) 00:08:00.559 12451.840 - 12511.418: 98.2327% ( 4) 00:08:00.559 12511.418 - 12570.996: 98.2630% ( 4) 00:08:00.559 12570.996 - 12630.575: 98.2858% ( 3) 00:08:00.559 12630.575 - 12690.153: 98.3086% ( 3) 00:08:00.559 12690.153 - 12749.731: 98.3389% ( 4) 00:08:00.559 12749.731 - 12809.309: 98.3768% ( 5) 00:08:00.559 12809.309 - 12868.887: 98.4072% ( 4) 00:08:00.559 12868.887 - 12928.465: 98.4375% ( 4) 00:08:00.559 12928.465 - 12988.044: 98.4754% ( 5) 00:08:00.559 12988.044 - 13047.622: 98.5058% ( 4) 00:08:00.559 13047.622 - 13107.200: 98.5361% ( 4) 00:08:00.559 13107.200 - 13166.778: 98.5437% ( 1) 00:08:00.559 13583.825 - 13643.404: 98.5589% ( 2) 00:08:00.559 13643.404 - 13702.982: 98.5968% ( 5) 00:08:00.559 13702.982 - 13762.560: 98.6271% ( 4) 00:08:00.559 13762.560 - 13822.138: 98.6499% ( 3) 00:08:00.559 13822.138 - 13881.716: 98.6878% ( 5) 00:08:00.559 13881.716 - 13941.295: 98.7181% ( 4) 00:08:00.559 13941.295 - 14000.873: 98.7485% ( 4) 00:08:00.559 14000.873 - 14060.451: 98.7788% ( 4) 00:08:00.559 14060.451 - 14120.029: 98.8092% ( 4) 00:08:00.559 14120.029 - 14179.607: 98.8471% ( 5) 00:08:00.559 14179.607 - 14239.185: 98.8774% ( 4) 00:08:00.559 14239.185 - 14298.764: 98.9078% ( 4) 00:08:00.559 14298.764 - 14358.342: 98.9381% ( 4) 00:08:00.559 14358.342 - 14417.920: 98.9684% ( 4) 00:08:00.559 14417.920 - 14477.498: 98.9912% ( 3) 00:08:00.559 14477.498 - 14537.076: 99.0140% ( 3) 00:08:00.559 14537.076 - 14596.655: 99.0291% ( 2) 00:08:00.559 21924.771 - 22043.927: 99.0519% ( 3) 00:08:00.559 22043.927 - 22163.084: 99.0822% ( 4) 00:08:00.559 22163.084 - 22282.240: 99.1126% ( 4) 00:08:00.559 22282.240 - 22401.396: 99.1581% ( 6) 00:08:00.559 22401.396 - 22520.553: 99.1884% ( 4) 00:08:00.559 22520.553 - 22639.709: 99.2339% ( 6) 00:08:00.559 22639.709 - 22758.865: 99.2718% ( 5) 00:08:00.559 22758.865 - 22878.022: 99.3022% ( 4) 00:08:00.559 22878.022 - 22997.178: 99.3401% ( 5) 00:08:00.559 22997.178 - 23116.335: 99.3856% ( 6) 00:08:00.559 23116.335 - 23235.491: 99.4160% ( 4) 00:08:00.559 23235.491 - 23354.647: 99.4615% ( 6) 00:08:00.559 23354.647 - 23473.804: 99.4994% ( 5) 00:08:00.559 23473.804 - 23592.960: 99.5146% ( 2) 00:08:00.559 27763.433 - 27882.589: 99.5297% ( 2) 00:08:00.559 27882.589 - 28001.745: 99.5677% ( 5) 00:08:00.559 28001.745 - 28120.902: 99.6132% ( 6) 00:08:00.559 28120.902 - 28240.058: 99.6511% ( 5) 00:08:00.559 28240.058 - 28359.215: 99.6966% ( 6) 00:08:00.559 28359.215 - 28478.371: 99.7345% ( 5) 00:08:00.559 28478.371 - 28597.527: 99.7800% ( 6) 00:08:00.559 28597.527 - 28716.684: 99.8180% ( 5) 00:08:00.559 28716.684 - 28835.840: 99.8635% ( 6) 00:08:00.559 28835.840 - 28954.996: 99.9014% ( 5) 00:08:00.559 28954.996 - 29074.153: 99.9469% ( 6) 00:08:00.559 29074.153 - 29193.309: 99.9924% ( 6) 00:08:00.559 29193.309 - 29312.465: 100.0000% ( 1) 00:08:00.559 00:08:00.559 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:00.559 ============================================================================== 00:08:00.559 Range in us Cumulative IO count 00:08:00.559 3813.004 - 3842.793: 0.0152% ( 2) 00:08:00.559 3842.793 - 3872.582: 0.0379% ( 3) 00:08:00.559 3872.582 - 3902.371: 0.0531% ( 2) 00:08:00.559 3902.371 - 3932.160: 0.0758% ( 3) 00:08:00.559 3932.160 - 3961.949: 0.0910% ( 2) 00:08:00.559 3961.949 - 3991.738: 0.1138% ( 3) 00:08:00.559 3991.738 - 4021.527: 0.1365% ( 3) 00:08:00.559 4021.527 - 4051.316: 0.1669% ( 4) 00:08:00.559 4051.316 - 4081.105: 0.1820% ( 2) 00:08:00.559 
4081.105 - 4110.895: 0.2048% ( 3) 00:08:00.559 4110.895 - 4140.684: 0.2275% ( 3) 00:08:00.559 4140.684 - 4170.473: 0.2503% ( 3) 00:08:00.559 4170.473 - 4200.262: 0.2731% ( 3) 00:08:00.559 4200.262 - 4230.051: 0.2882% ( 2) 00:08:00.559 4230.051 - 4259.840: 0.3110% ( 3) 00:08:00.559 4259.840 - 4289.629: 0.3262% ( 2) 00:08:00.559 4289.629 - 4319.418: 0.3489% ( 3) 00:08:00.559 4319.418 - 4349.207: 0.3717% ( 3) 00:08:00.559 4349.207 - 4378.996: 0.3944% ( 3) 00:08:00.559 4378.996 - 4408.785: 0.4096% ( 2) 00:08:00.559 4408.785 - 4438.575: 0.4323% ( 3) 00:08:00.559 4438.575 - 4468.364: 0.4475% ( 2) 00:08:00.559 4468.364 - 4498.153: 0.4703% ( 3) 00:08:00.559 4498.153 - 4527.942: 0.4854% ( 2) 00:08:00.559 6345.076 - 6374.865: 0.5006% ( 2) 00:08:00.559 6374.865 - 6404.655: 0.5234% ( 3) 00:08:00.559 6404.655 - 6434.444: 0.5385% ( 2) 00:08:00.559 6434.444 - 6464.233: 0.5613% ( 3) 00:08:00.559 6464.233 - 6494.022: 0.5840% ( 3) 00:08:00.559 6494.022 - 6523.811: 0.6068% ( 3) 00:08:00.559 6523.811 - 6553.600: 0.6220% ( 2) 00:08:00.559 6553.600 - 6583.389: 0.6447% ( 3) 00:08:00.559 6583.389 - 6613.178: 0.6751% ( 4) 00:08:00.559 6613.178 - 6642.967: 0.6978% ( 3) 00:08:00.559 6642.967 - 6672.756: 0.7130% ( 2) 00:08:00.559 6672.756 - 6702.545: 0.7357% ( 3) 00:08:00.559 6702.545 - 6732.335: 0.7585% ( 3) 00:08:00.559 6732.335 - 6762.124: 0.7812% ( 3) 00:08:00.559 6762.124 - 6791.913: 0.8040% ( 3) 00:08:00.559 6791.913 - 6821.702: 0.8192% ( 2) 00:08:00.559 6821.702 - 6851.491: 0.8419% ( 3) 00:08:00.559 6851.491 - 6881.280: 0.8647% ( 3) 00:08:00.559 6881.280 - 6911.069: 0.8874% ( 3) 00:08:00.559 6911.069 - 6940.858: 0.9102% ( 3) 00:08:00.559 6940.858 - 6970.647: 0.9254% ( 2) 00:08:00.559 6970.647 - 7000.436: 0.9481% ( 3) 00:08:00.559 7000.436 - 7030.225: 0.9709% ( 3) 00:08:00.559 7864.320 - 7923.898: 1.0771% ( 14) 00:08:00.559 7923.898 - 7983.476: 1.3198% ( 32) 00:08:00.559 7983.476 - 8043.055: 1.6232% ( 40) 00:08:00.559 8043.055 - 8102.633: 2.1162% ( 65) 00:08:00.559 8102.633 - 8162.211: 2.7761% ( 87) 00:08:00.559 8162.211 - 8221.789: 3.7090% ( 123) 00:08:00.559 8221.789 - 8281.367: 5.0061% ( 171) 00:08:00.559 8281.367 - 8340.945: 6.6899% ( 222) 00:08:00.559 8340.945 - 8400.524: 8.5938% ( 251) 00:08:00.559 8400.524 - 8460.102: 10.7555% ( 285) 00:08:00.559 8460.102 - 8519.680: 13.1068% ( 310) 00:08:00.559 8519.680 - 8579.258: 15.7084% ( 343) 00:08:00.559 8579.258 - 8638.836: 18.3404% ( 347) 00:08:00.559 8638.836 - 8698.415: 21.0255% ( 354) 00:08:00.559 8698.415 - 8757.993: 23.7106% ( 354) 00:08:00.559 8757.993 - 8817.571: 26.4336% ( 359) 00:08:00.559 8817.571 - 8877.149: 29.1490% ( 358) 00:08:00.559 8877.149 - 8936.727: 31.6444% ( 329) 00:08:00.559 8936.727 - 8996.305: 34.1019% ( 324) 00:08:00.559 8996.305 - 9055.884: 36.4078% ( 304) 00:08:00.559 9055.884 - 9115.462: 38.5695% ( 285) 00:08:00.559 9115.462 - 9175.040: 40.4961% ( 254) 00:08:00.559 9175.040 - 9234.618: 42.0737% ( 208) 00:08:00.559 9234.618 - 9294.196: 43.5300% ( 192) 00:08:00.559 9294.196 - 9353.775: 44.8726% ( 177) 00:08:00.559 9353.775 - 9413.353: 46.0634% ( 157) 00:08:00.559 9413.353 - 9472.931: 47.3149% ( 165) 00:08:00.559 9472.931 - 9532.509: 48.5058% ( 157) 00:08:00.559 9532.509 - 9592.087: 49.8635% ( 179) 00:08:00.559 9592.087 - 9651.665: 51.3350% ( 194) 00:08:00.559 9651.665 - 9711.244: 52.8671% ( 202) 00:08:00.559 9711.244 - 9770.822: 54.4827% ( 213) 00:08:00.559 9770.822 - 9830.400: 56.3031% ( 240) 00:08:00.559 9830.400 - 9889.978: 58.4269% ( 280) 00:08:00.559 9889.978 - 9949.556: 60.7100% ( 301) 00:08:00.559 9949.556 - 10009.135: 63.1751% ( 
325) 00:08:00.559 10009.135 - 10068.713: 65.7843% ( 344) 00:08:00.560 10068.713 - 10128.291: 68.4769% ( 355) 00:08:00.560 10128.291 - 10187.869: 71.1468% ( 352) 00:08:00.560 10187.869 - 10247.447: 73.8092% ( 351) 00:08:00.560 10247.447 - 10307.025: 76.2894% ( 327) 00:08:00.560 10307.025 - 10366.604: 78.7242% ( 321) 00:08:00.560 10366.604 - 10426.182: 81.0680% ( 309) 00:08:00.560 10426.182 - 10485.760: 83.2904% ( 293) 00:08:00.560 10485.760 - 10545.338: 85.4293% ( 282) 00:08:00.560 10545.338 - 10604.916: 87.5455% ( 279) 00:08:00.560 10604.916 - 10664.495: 89.4948% ( 257) 00:08:00.560 10664.495 - 10724.073: 91.1939% ( 224) 00:08:00.560 10724.073 - 10783.651: 92.6729% ( 195) 00:08:00.560 10783.651 - 10843.229: 93.8789% ( 159) 00:08:00.560 10843.229 - 10902.807: 94.8043% ( 122) 00:08:00.560 10902.807 - 10962.385: 95.6235% ( 108) 00:08:00.560 10962.385 - 11021.964: 96.2682% ( 85) 00:08:00.560 11021.964 - 11081.542: 96.8143% ( 72) 00:08:00.560 11081.542 - 11141.120: 97.1632% ( 46) 00:08:00.560 11141.120 - 11200.698: 97.4211% ( 34) 00:08:00.560 11200.698 - 11260.276: 97.6411% ( 29) 00:08:00.560 11260.276 - 11319.855: 97.8307% ( 25) 00:08:00.560 11319.855 - 11379.433: 97.9293% ( 13) 00:08:00.560 11379.433 - 11439.011: 98.0052% ( 10) 00:08:00.560 11439.011 - 11498.589: 98.0279% ( 3) 00:08:00.560 11498.589 - 11558.167: 98.0431% ( 2) 00:08:00.560 11558.167 - 11617.745: 98.0583% ( 2) 00:08:00.560 12749.731 - 12809.309: 98.0658% ( 1) 00:08:00.560 12809.309 - 12868.887: 98.0810% ( 2) 00:08:00.560 12868.887 - 12928.465: 98.1113% ( 4) 00:08:00.560 12928.465 - 12988.044: 98.1417% ( 4) 00:08:00.560 12988.044 - 13047.622: 98.1796% ( 5) 00:08:00.560 13047.622 - 13107.200: 98.2100% ( 4) 00:08:00.560 13107.200 - 13166.778: 98.2630% ( 7) 00:08:00.560 13166.778 - 13226.356: 98.3237% ( 8) 00:08:00.560 13226.356 - 13285.935: 98.3844% ( 8) 00:08:00.560 13285.935 - 13345.513: 98.4451% ( 8) 00:08:00.560 13345.513 - 13405.091: 98.5058% ( 8) 00:08:00.560 13405.091 - 13464.669: 98.5664% ( 8) 00:08:00.560 13464.669 - 13524.247: 98.6271% ( 8) 00:08:00.560 13524.247 - 13583.825: 98.6802% ( 7) 00:08:00.560 13583.825 - 13643.404: 98.7409% ( 8) 00:08:00.560 13643.404 - 13702.982: 98.7940% ( 7) 00:08:00.560 13702.982 - 13762.560: 98.8623% ( 9) 00:08:00.560 13762.560 - 13822.138: 98.9002% ( 5) 00:08:00.560 13822.138 - 13881.716: 98.9305% ( 4) 00:08:00.560 13881.716 - 13941.295: 98.9684% ( 5) 00:08:00.560 13941.295 - 14000.873: 98.9912% ( 3) 00:08:00.560 14000.873 - 14060.451: 99.0215% ( 4) 00:08:00.560 14060.451 - 14120.029: 99.0291% ( 1) 00:08:00.560 21090.676 - 21209.833: 99.0595% ( 4) 00:08:00.560 21209.833 - 21328.989: 99.0974% ( 5) 00:08:00.560 21328.989 - 21448.145: 99.1353% ( 5) 00:08:00.560 21448.145 - 21567.302: 99.1808% ( 6) 00:08:00.560 21567.302 - 21686.458: 99.2112% ( 4) 00:08:00.560 21686.458 - 21805.615: 99.2491% ( 5) 00:08:00.560 21805.615 - 21924.771: 99.2946% ( 6) 00:08:00.560 21924.771 - 22043.927: 99.3325% ( 5) 00:08:00.560 22043.927 - 22163.084: 99.3704% ( 5) 00:08:00.560 22163.084 - 22282.240: 99.4084% ( 5) 00:08:00.560 22282.240 - 22401.396: 99.4463% ( 5) 00:08:00.560 22401.396 - 22520.553: 99.4842% ( 5) 00:08:00.560 22520.553 - 22639.709: 99.5146% ( 4) 00:08:00.560 26810.182 - 26929.338: 99.5297% ( 2) 00:08:00.560 26929.338 - 27048.495: 99.5601% ( 4) 00:08:00.560 27048.495 - 27167.651: 99.6056% ( 6) 00:08:00.560 27167.651 - 27286.807: 99.6435% ( 5) 00:08:00.560 27286.807 - 27405.964: 99.6890% ( 6) 00:08:00.560 27405.964 - 27525.120: 99.7269% ( 5) 00:08:00.560 27525.120 - 27644.276: 99.7649% ( 5) 
00:08:00.560 27644.276 - 27763.433: 99.8028% ( 5) 00:08:00.560 27763.433 - 27882.589: 99.8407% ( 5) 00:08:00.560 27882.589 - 28001.745: 99.8862% ( 6) 00:08:00.560 28001.745 - 28120.902: 99.9317% ( 6) 00:08:00.560 28120.902 - 28240.058: 99.9697% ( 5) 00:08:00.560 28240.058 - 28359.215: 100.0000% ( 4) 00:08:00.560 00:08:00.560 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:00.560 ============================================================================== 00:08:00.560 Range in us Cumulative IO count 00:08:00.560 3351.273 - 3366.167: 0.0076% ( 1) 00:08:00.560 3366.167 - 3381.062: 0.0228% ( 2) 00:08:00.560 3381.062 - 3395.956: 0.0379% ( 2) 00:08:00.560 3395.956 - 3410.851: 0.0455% ( 1) 00:08:00.560 3410.851 - 3425.745: 0.0531% ( 1) 00:08:00.560 3425.745 - 3440.640: 0.0607% ( 1) 00:08:00.560 3440.640 - 3455.535: 0.0758% ( 2) 00:08:00.560 3455.535 - 3470.429: 0.0834% ( 1) 00:08:00.560 3470.429 - 3485.324: 0.0910% ( 1) 00:08:00.560 3485.324 - 3500.218: 0.1062% ( 2) 00:08:00.560 3500.218 - 3515.113: 0.1138% ( 1) 00:08:00.560 3515.113 - 3530.007: 0.1214% ( 1) 00:08:00.560 3530.007 - 3544.902: 0.1365% ( 2) 00:08:00.560 3544.902 - 3559.796: 0.1441% ( 1) 00:08:00.560 3559.796 - 3574.691: 0.1593% ( 2) 00:08:00.560 3574.691 - 3589.585: 0.1745% ( 2) 00:08:00.560 3589.585 - 3604.480: 0.1820% ( 1) 00:08:00.560 3604.480 - 3619.375: 0.1972% ( 2) 00:08:00.560 3619.375 - 3634.269: 0.2048% ( 1) 00:08:00.560 3634.269 - 3649.164: 0.2124% ( 1) 00:08:00.560 3649.164 - 3664.058: 0.2275% ( 2) 00:08:00.560 3664.058 - 3678.953: 0.2351% ( 1) 00:08:00.560 3678.953 - 3693.847: 0.2503% ( 2) 00:08:00.560 3693.847 - 3708.742: 0.2579% ( 1) 00:08:00.560 3708.742 - 3723.636: 0.2655% ( 1) 00:08:00.560 3723.636 - 3738.531: 0.2806% ( 2) 00:08:00.560 3738.531 - 3753.425: 0.2882% ( 1) 00:08:00.560 3753.425 - 3768.320: 0.3034% ( 2) 00:08:00.560 3768.320 - 3783.215: 0.3110% ( 1) 00:08:00.560 3783.215 - 3798.109: 0.3186% ( 1) 00:08:00.560 3798.109 - 3813.004: 0.3337% ( 2) 00:08:00.560 3813.004 - 3842.793: 0.3413% ( 1) 00:08:00.560 3842.793 - 3872.582: 0.3641% ( 3) 00:08:00.560 3872.582 - 3902.371: 0.3868% ( 3) 00:08:00.560 3902.371 - 3932.160: 0.4096% ( 3) 00:08:00.560 3932.160 - 3961.949: 0.4248% ( 2) 00:08:00.560 3961.949 - 3991.738: 0.4399% ( 2) 00:08:00.560 3991.738 - 4021.527: 0.4551% ( 2) 00:08:00.560 4021.527 - 4051.316: 0.4779% ( 3) 00:08:00.560 4051.316 - 4081.105: 0.4854% ( 1) 00:08:00.560 5868.451 - 5898.240: 0.5158% ( 4) 00:08:00.560 5898.240 - 5928.029: 0.5385% ( 3) 00:08:00.560 5928.029 - 5957.818: 0.5537% ( 2) 00:08:00.560 5957.818 - 5987.607: 0.5689% ( 2) 00:08:00.560 5987.607 - 6017.396: 0.5916% ( 3) 00:08:00.560 6017.396 - 6047.185: 0.6144% ( 3) 00:08:00.560 6047.185 - 6076.975: 0.6371% ( 3) 00:08:00.560 6076.975 - 6106.764: 0.6675% ( 4) 00:08:00.560 6106.764 - 6136.553: 0.6826% ( 2) 00:08:00.560 6136.553 - 6166.342: 0.7054% ( 3) 00:08:00.560 6166.342 - 6196.131: 0.7282% ( 3) 00:08:00.560 6196.131 - 6225.920: 0.7509% ( 3) 00:08:00.560 6225.920 - 6255.709: 0.7661% ( 2) 00:08:00.560 6255.709 - 6285.498: 0.7888% ( 3) 00:08:00.560 6285.498 - 6315.287: 0.8116% ( 3) 00:08:00.560 6315.287 - 6345.076: 0.8268% ( 2) 00:08:00.560 6374.865 - 6404.655: 0.8495% ( 3) 00:08:00.560 6404.655 - 6434.444: 0.8723% ( 3) 00:08:00.560 6434.444 - 6464.233: 0.8950% ( 3) 00:08:00.560 6464.233 - 6494.022: 0.9102% ( 2) 00:08:00.560 6494.022 - 6523.811: 0.9329% ( 3) 00:08:00.560 6523.811 - 6553.600: 0.9481% ( 2) 00:08:00.560 6553.600 - 6583.389: 0.9709% ( 3) 00:08:00.560 7864.320 - 7923.898: 1.0391% ( 9) 00:08:00.560 
7923.898 - 7983.476: 1.2288% ( 25) 00:08:00.560 7983.476 - 8043.055: 1.5397% ( 41) 00:08:00.560 8043.055 - 8102.633: 2.0176% ( 63) 00:08:00.560 8102.633 - 8162.211: 2.6320% ( 81) 00:08:00.560 8162.211 - 8221.789: 3.6256% ( 131) 00:08:00.560 8221.789 - 8281.367: 4.8999% ( 168) 00:08:00.560 8281.367 - 8340.945: 6.5610% ( 219) 00:08:00.560 8340.945 - 8400.524: 8.4724% ( 252) 00:08:00.560 8400.524 - 8460.102: 10.6265% ( 284) 00:08:00.560 8460.102 - 8519.680: 13.0840% ( 324) 00:08:00.560 8519.680 - 8579.258: 15.6174% ( 334) 00:08:00.560 8579.258 - 8638.836: 18.2570% ( 348) 00:08:00.560 8638.836 - 8698.415: 20.9800% ( 359) 00:08:00.560 8698.415 - 8757.993: 23.7181% ( 361) 00:08:00.560 8757.993 - 8817.571: 26.4260% ( 357) 00:08:00.560 8817.571 - 8877.149: 29.1110% ( 354) 00:08:00.560 8877.149 - 8936.727: 31.6672% ( 337) 00:08:00.560 8936.727 - 8996.305: 34.0868% ( 319) 00:08:00.560 8996.305 - 9055.884: 36.4684% ( 314) 00:08:00.560 9055.884 - 9115.462: 38.7060% ( 295) 00:08:00.560 9115.462 - 9175.040: 40.6402% ( 255) 00:08:00.560 9175.040 - 9234.618: 42.2178% ( 208) 00:08:00.561 9234.618 - 9294.196: 43.7424% ( 201) 00:08:00.561 9294.196 - 9353.775: 45.0622% ( 174) 00:08:00.561 9353.775 - 9413.353: 46.2985% ( 163) 00:08:00.561 9413.353 - 9472.931: 47.5349% ( 163) 00:08:00.561 9472.931 - 9532.509: 48.8547% ( 174) 00:08:00.561 9532.509 - 9592.087: 50.2048% ( 178) 00:08:00.561 9592.087 - 9651.665: 51.5928% ( 183) 00:08:00.561 9651.665 - 9711.244: 53.0264% ( 189) 00:08:00.561 9711.244 - 9770.822: 54.5661% ( 203) 00:08:00.561 9770.822 - 9830.400: 56.3562% ( 236) 00:08:00.561 9830.400 - 9889.978: 58.3359% ( 261) 00:08:00.561 9889.978 - 9949.556: 60.6569% ( 306) 00:08:00.561 9949.556 - 10009.135: 63.0765% ( 319) 00:08:00.561 10009.135 - 10068.713: 65.6402% ( 338) 00:08:00.561 10068.713 - 10128.291: 68.3328% ( 355) 00:08:00.561 10128.291 - 10187.869: 71.0482% ( 358) 00:08:00.561 10187.869 - 10247.447: 73.6271% ( 340) 00:08:00.561 10247.447 - 10307.025: 76.1226% ( 329) 00:08:00.561 10307.025 - 10366.604: 78.5498% ( 320) 00:08:00.561 10366.604 - 10426.182: 80.9314% ( 314) 00:08:00.561 10426.182 - 10485.760: 83.2069% ( 300) 00:08:00.561 10485.760 - 10545.338: 85.4672% ( 298) 00:08:00.561 10545.338 - 10604.916: 87.5683% ( 277) 00:08:00.561 10604.916 - 10664.495: 89.4797% ( 252) 00:08:00.561 10664.495 - 10724.073: 91.3304% ( 244) 00:08:00.561 10724.073 - 10783.651: 92.9005% ( 207) 00:08:00.561 10783.651 - 10843.229: 94.0610% ( 153) 00:08:00.561 10843.229 - 10902.807: 94.9712% ( 120) 00:08:00.561 10902.807 - 10962.385: 95.7676% ( 105) 00:08:00.561 10962.385 - 11021.964: 96.3744% ( 80) 00:08:00.561 11021.964 - 11081.542: 96.8750% ( 66) 00:08:00.561 11081.542 - 11141.120: 97.2163% ( 45) 00:08:00.561 11141.120 - 11200.698: 97.4590% ( 32) 00:08:00.561 11200.698 - 11260.276: 97.6638% ( 27) 00:08:00.561 11260.276 - 11319.855: 97.8459% ( 24) 00:08:00.561 11319.855 - 11379.433: 97.9369% ( 12) 00:08:00.561 11379.433 - 11439.011: 97.9976% ( 8) 00:08:00.561 11439.011 - 11498.589: 98.0355% ( 5) 00:08:00.561 11498.589 - 11558.167: 98.0583% ( 3) 00:08:00.561 12570.996 - 12630.575: 98.0658% ( 1) 00:08:00.561 12630.575 - 12690.153: 98.0962% ( 4) 00:08:00.561 12690.153 - 12749.731: 98.1189% ( 3) 00:08:00.561 12749.731 - 12809.309: 98.1493% ( 4) 00:08:00.561 12809.309 - 12868.887: 98.1796% ( 4) 00:08:00.561 12868.887 - 12928.465: 98.2100% ( 4) 00:08:00.561 12928.465 - 12988.044: 98.2403% ( 4) 00:08:00.561 12988.044 - 13047.622: 98.2706% ( 4) 00:08:00.561 13047.622 - 13107.200: 98.3086% ( 5) 00:08:00.561 13107.200 - 13166.778: 
98.3389% ( 4) 00:08:00.561 13166.778 - 13226.356: 98.3692% ( 4) 00:08:00.561 13226.356 - 13285.935: 98.3996% ( 4) 00:08:00.561 13285.935 - 13345.513: 98.4299% ( 4) 00:08:00.561 13345.513 - 13405.091: 98.4678% ( 5) 00:08:00.561 13405.091 - 13464.669: 98.4982% ( 4) 00:08:00.561 13464.669 - 13524.247: 98.5513% ( 7) 00:08:00.561 13524.247 - 13583.825: 98.5892% ( 5) 00:08:00.561 13583.825 - 13643.404: 98.6499% ( 8) 00:08:00.561 13643.404 - 13702.982: 98.6802% ( 4) 00:08:00.561 13702.982 - 13762.560: 98.7030% ( 3) 00:08:00.561 13762.560 - 13822.138: 98.7409% ( 5) 00:08:00.561 13822.138 - 13881.716: 98.7712% ( 4) 00:08:00.561 13881.716 - 13941.295: 98.8016% ( 4) 00:08:00.561 13941.295 - 14000.873: 98.8319% ( 4) 00:08:00.561 14000.873 - 14060.451: 98.8623% ( 4) 00:08:00.561 14060.451 - 14120.029: 98.8926% ( 4) 00:08:00.561 14120.029 - 14179.607: 98.9305% ( 5) 00:08:00.561 14179.607 - 14239.185: 98.9609% ( 4) 00:08:00.561 14239.185 - 14298.764: 98.9912% ( 4) 00:08:00.561 14298.764 - 14358.342: 99.0215% ( 4) 00:08:00.561 14358.342 - 14417.920: 99.0291% ( 1) 00:08:00.561 20137.425 - 20256.582: 99.0519% ( 3) 00:08:00.561 20256.582 - 20375.738: 99.0974% ( 6) 00:08:00.561 20375.738 - 20494.895: 99.1353% ( 5) 00:08:00.561 20494.895 - 20614.051: 99.1732% ( 5) 00:08:00.561 20614.051 - 20733.207: 99.2112% ( 5) 00:08:00.561 20733.207 - 20852.364: 99.2491% ( 5) 00:08:00.561 20852.364 - 20971.520: 99.2870% ( 5) 00:08:00.561 20971.520 - 21090.676: 99.3249% ( 5) 00:08:00.561 21090.676 - 21209.833: 99.3704% ( 6) 00:08:00.561 21209.833 - 21328.989: 99.4084% ( 5) 00:08:00.561 21328.989 - 21448.145: 99.4463% ( 5) 00:08:00.561 21448.145 - 21567.302: 99.4918% ( 6) 00:08:00.561 21567.302 - 21686.458: 99.5146% ( 3) 00:08:00.561 25856.931 - 25976.087: 99.5297% ( 2) 00:08:00.561 25976.087 - 26095.244: 99.5677% ( 5) 00:08:00.561 26095.244 - 26214.400: 99.6056% ( 5) 00:08:00.561 26214.400 - 26333.556: 99.6435% ( 5) 00:08:00.561 26333.556 - 26452.713: 99.6890% ( 6) 00:08:00.561 26452.713 - 26571.869: 99.7269% ( 5) 00:08:00.561 26571.869 - 26691.025: 99.7649% ( 5) 00:08:00.561 26691.025 - 26810.182: 99.8028% ( 5) 00:08:00.561 26810.182 - 26929.338: 99.8407% ( 5) 00:08:00.561 26929.338 - 27048.495: 99.8786% ( 5) 00:08:00.561 27048.495 - 27167.651: 99.9166% ( 5) 00:08:00.561 27167.651 - 27286.807: 99.9621% ( 6) 00:08:00.561 27286.807 - 27405.964: 100.0000% ( 5) 00:08:00.561 00:08:00.561 12:49:51 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:00.561 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:01.938 Initializing NVMe Controllers 00:08:01.938 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:01.938 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:01.938 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.938 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:01.938 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:01.938 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:01.938 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:01.938 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:01.938 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:01.938 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:01.938 Initialization complete. Launching workers. 
00:08:01.938 ======================================================== 00:08:01.938 Latency(us) 00:08:01.938 Device Information : IOPS MiB/s Average min max 00:08:01.938 PCIE (0000:00:10.0) NSID 1 from core 0: 12523.52 146.76 10224.40 6158.01 30718.46 00:08:01.938 PCIE (0000:00:11.0) NSID 1 from core 0: 12523.52 146.76 10214.83 5853.42 30138.77 00:08:01.938 PCIE (0000:00:13.0) NSID 1 from core 0: 12523.52 146.76 10204.47 5180.62 30441.58 00:08:01.938 PCIE (0000:00:12.0) NSID 1 from core 0: 12523.52 146.76 10193.96 4781.81 29859.75 00:08:01.938 PCIE (0000:00:12.0) NSID 2 from core 0: 12523.52 146.76 10184.60 4407.65 29386.40 00:08:01.938 PCIE (0000:00:12.0) NSID 3 from core 0: 12523.52 146.76 10174.45 4104.85 28801.27 00:08:01.938 ======================================================== 00:08:01.938 Total : 75141.14 880.56 10199.45 4104.85 30718.46 00:08:01.938 00:08:01.938 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:01.938 ================================================================================= 00:08:01.938 1.00000% : 8519.680us 00:08:01.938 10.00000% : 8817.571us 00:08:01.938 25.00000% : 9234.618us 00:08:01.938 50.00000% : 9770.822us 00:08:01.938 75.00000% : 10724.073us 00:08:01.938 90.00000% : 12034.793us 00:08:01.938 95.00000% : 13047.622us 00:08:01.938 98.00000% : 14298.764us 00:08:01.938 99.00000% : 21448.145us 00:08:01.938 99.50000% : 29074.153us 00:08:01.938 99.90000% : 30504.029us 00:08:01.938 99.99000% : 30742.342us 00:08:01.938 99.99900% : 30742.342us 00:08:01.938 99.99990% : 30742.342us 00:08:01.938 99.99999% : 30742.342us 00:08:01.938 00:08:01.938 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:01.938 ================================================================================= 00:08:01.938 1.00000% : 8579.258us 00:08:01.938 10.00000% : 8936.727us 00:08:01.938 25.00000% : 9234.618us 00:08:01.938 50.00000% : 9711.244us 00:08:01.938 75.00000% : 10724.073us 00:08:01.938 90.00000% : 11915.636us 00:08:01.938 95.00000% : 12809.309us 00:08:01.938 98.00000% : 14060.451us 00:08:01.938 99.00000% : 21924.771us 00:08:01.938 99.50000% : 28716.684us 00:08:01.938 99.90000% : 29908.247us 00:08:01.938 99.99000% : 30146.560us 00:08:01.938 99.99900% : 30146.560us 00:08:01.938 99.99990% : 30146.560us 00:08:01.938 99.99999% : 30146.560us 00:08:01.938 00:08:01.938 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:01.938 ================================================================================= 00:08:01.938 1.00000% : 8281.367us 00:08:01.938 10.00000% : 8936.727us 00:08:01.938 25.00000% : 9234.618us 00:08:01.938 50.00000% : 9711.244us 00:08:01.938 75.00000% : 10664.495us 00:08:01.938 90.00000% : 11915.636us 00:08:01.938 95.00000% : 12868.887us 00:08:01.938 98.00000% : 14120.029us 00:08:01.938 99.00000% : 22163.084us 00:08:01.938 99.50000% : 28954.996us 00:08:01.938 99.90000% : 30265.716us 00:08:01.938 99.99000% : 30504.029us 00:08:01.938 99.99900% : 30504.029us 00:08:01.938 99.99990% : 30504.029us 00:08:01.938 99.99999% : 30504.029us 00:08:01.938 00:08:01.938 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:01.938 ================================================================================= 00:08:01.938 1.00000% : 7923.898us 00:08:01.938 10.00000% : 8936.727us 00:08:01.938 25.00000% : 9234.618us 00:08:01.938 50.00000% : 9711.244us 00:08:01.938 75.00000% : 10604.916us 00:08:01.938 90.00000% : 11915.636us 00:08:01.938 95.00000% : 12868.887us 00:08:01.938 98.00000% : 14179.607us 
00:08:01.938 99.00000% : 21805.615us 00:08:01.938 99.50000% : 28478.371us 00:08:01.938 99.90000% : 29669.935us 00:08:01.938 99.99000% : 29908.247us 00:08:01.938 99.99900% : 29908.247us 00:08:01.938 99.99990% : 29908.247us 00:08:01.938 99.99999% : 29908.247us 00:08:01.938 00:08:01.938 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:01.938 ================================================================================= 00:08:01.938 1.00000% : 7685.585us 00:08:01.938 10.00000% : 8936.727us 00:08:01.938 25.00000% : 9234.618us 00:08:01.938 50.00000% : 9711.244us 00:08:01.938 75.00000% : 10604.916us 00:08:01.938 90.00000% : 11975.215us 00:08:01.938 95.00000% : 12928.465us 00:08:01.938 98.00000% : 14179.607us 00:08:01.938 99.00000% : 21448.145us 00:08:01.938 99.50000% : 28001.745us 00:08:01.938 99.90000% : 29193.309us 00:08:01.938 99.99000% : 29431.622us 00:08:01.938 99.99900% : 29431.622us 00:08:01.938 99.99990% : 29431.622us 00:08:01.938 99.99999% : 29431.622us 00:08:01.938 00:08:01.938 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:01.938 ================================================================================= 00:08:01.938 1.00000% : 7417.484us 00:08:01.938 10.00000% : 8936.727us 00:08:01.938 25.00000% : 9234.618us 00:08:01.938 50.00000% : 9711.244us 00:08:01.938 75.00000% : 10604.916us 00:08:01.938 90.00000% : 11915.636us 00:08:01.938 95.00000% : 12868.887us 00:08:01.938 98.00000% : 14239.185us 00:08:01.938 99.00000% : 21090.676us 00:08:01.938 99.50000% : 27048.495us 00:08:01.938 99.90000% : 28597.527us 00:08:01.938 99.99000% : 28835.840us 00:08:01.938 99.99900% : 28835.840us 00:08:01.938 99.99990% : 28835.840us 00:08:01.938 99.99999% : 28835.840us 00:08:01.938 00:08:01.938 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:01.938 ============================================================================== 00:08:01.938 Range in us Cumulative IO count 00:08:01.938 6136.553 - 6166.342: 0.0080% ( 1) 00:08:01.938 6166.342 - 6196.131: 0.0399% ( 4) 00:08:01.938 6196.131 - 6225.920: 0.0717% ( 4) 00:08:01.938 6225.920 - 6255.709: 0.0957% ( 3) 00:08:01.938 6255.709 - 6285.498: 0.1196% ( 3) 00:08:01.938 6285.498 - 6315.287: 0.1355% ( 2) 00:08:01.938 6315.287 - 6345.076: 0.1435% ( 1) 00:08:01.938 6345.076 - 6374.865: 0.1594% ( 2) 00:08:01.938 6374.865 - 6404.655: 0.1674% ( 1) 00:08:01.938 6434.444 - 6464.233: 0.1913% ( 3) 00:08:01.938 6464.233 - 6494.022: 0.2073% ( 2) 00:08:01.938 6494.022 - 6523.811: 0.2152% ( 1) 00:08:01.938 6523.811 - 6553.600: 0.2392% ( 3) 00:08:01.938 6553.600 - 6583.389: 0.2551% ( 2) 00:08:01.938 6583.389 - 6613.178: 0.2631% ( 1) 00:08:01.938 6613.178 - 6642.967: 0.2790% ( 2) 00:08:01.938 6642.967 - 6672.756: 0.2950% ( 2) 00:08:01.938 6672.756 - 6702.545: 0.3029% ( 1) 00:08:01.938 6702.545 - 6732.335: 0.3189% ( 2) 00:08:01.938 6732.335 - 6762.124: 0.3348% ( 2) 00:08:01.938 6762.124 - 6791.913: 0.3508% ( 2) 00:08:01.938 6791.913 - 6821.702: 0.3667% ( 2) 00:08:01.938 6821.702 - 6851.491: 0.3827% ( 2) 00:08:01.938 6851.491 - 6881.280: 0.3906% ( 1) 00:08:01.938 6881.280 - 6911.069: 0.4066% ( 2) 00:08:01.938 6911.069 - 6940.858: 0.4225% ( 2) 00:08:01.938 6940.858 - 6970.647: 0.4305% ( 1) 00:08:01.938 6970.647 - 7000.436: 0.4544% ( 3) 00:08:01.938 7000.436 - 7030.225: 0.4624% ( 1) 00:08:01.938 7030.225 - 7060.015: 0.4703% ( 1) 00:08:01.938 7060.015 - 7089.804: 0.4863% ( 2) 00:08:01.938 7089.804 - 7119.593: 0.4943% ( 1) 00:08:01.938 7119.593 - 7149.382: 0.5102% ( 2) 00:08:01.938 8281.367 - 8340.945: 0.5261% ( 2) 
00:08:01.938 8340.945 - 8400.524: 0.6457% ( 15) 00:08:01.938 8400.524 - 8460.102: 0.9885% ( 43) 00:08:01.938 8460.102 - 8519.680: 2.1763% ( 149) 00:08:01.938 8519.680 - 8579.258: 3.2765% ( 138) 00:08:01.938 8579.258 - 8638.836: 4.8948% ( 203) 00:08:01.938 8638.836 - 8698.415: 6.5529% ( 208) 00:08:01.938 8698.415 - 8757.993: 8.5220% ( 247) 00:08:01.938 8757.993 - 8817.571: 10.6346% ( 265) 00:08:01.938 8817.571 - 8877.149: 12.9464% ( 290) 00:08:01.938 8877.149 - 8936.727: 15.3141% ( 297) 00:08:01.938 8936.727 - 8996.305: 17.5143% ( 276) 00:08:01.938 8996.305 - 9055.884: 19.9936% ( 311) 00:08:01.938 9055.884 - 9115.462: 22.5048% ( 315) 00:08:01.938 9115.462 - 9175.040: 24.8645% ( 296) 00:08:01.938 9175.040 - 9234.618: 27.1365% ( 285) 00:08:01.938 9234.618 - 9294.196: 29.8071% ( 335) 00:08:01.938 9294.196 - 9353.775: 32.3182% ( 315) 00:08:01.938 9353.775 - 9413.353: 35.1244% ( 352) 00:08:01.938 9413.353 - 9472.931: 37.7471% ( 329) 00:08:01.938 9472.931 - 9532.509: 40.6649% ( 366) 00:08:01.938 9532.509 - 9592.087: 43.4471% ( 349) 00:08:01.938 9592.087 - 9651.665: 46.3010% ( 358) 00:08:01.938 9651.665 - 9711.244: 48.9955% ( 338) 00:08:01.938 9711.244 - 9770.822: 51.4190% ( 304) 00:08:01.938 9770.822 - 9830.400: 54.0497% ( 330) 00:08:01.938 9830.400 - 9889.978: 56.5450% ( 313) 00:08:01.938 9889.978 - 9949.556: 58.6336% ( 262) 00:08:01.938 9949.556 - 10009.135: 60.4273% ( 225) 00:08:01.938 10009.135 - 10068.713: 62.2050% ( 223) 00:08:01.938 10068.713 - 10128.291: 63.6001% ( 175) 00:08:01.938 10128.291 - 10187.869: 65.0670% ( 184) 00:08:01.938 10187.869 - 10247.447: 66.7570% ( 212) 00:08:01.938 10247.447 - 10307.025: 68.2637% ( 189) 00:08:01.938 10307.025 - 10366.604: 69.6189% ( 170) 00:08:01.938 10366.604 - 10426.182: 70.7430% ( 141) 00:08:01.939 10426.182 - 10485.760: 71.7714% ( 129) 00:08:01.939 10485.760 - 10545.338: 72.7041% ( 117) 00:08:01.939 10545.338 - 10604.916: 73.6607% ( 120) 00:08:01.939 10604.916 - 10664.495: 74.5376% ( 110) 00:08:01.939 10664.495 - 10724.073: 75.4464% ( 114) 00:08:01.939 10724.073 - 10783.651: 76.2994% ( 107) 00:08:01.939 10783.651 - 10843.229: 77.1445% ( 106) 00:08:01.939 10843.229 - 10902.807: 77.9895% ( 106) 00:08:01.939 10902.807 - 10962.385: 78.8186% ( 104) 00:08:01.939 10962.385 - 11021.964: 79.8469% ( 129) 00:08:01.939 11021.964 - 11081.542: 80.6760% ( 104) 00:08:01.939 11081.542 - 11141.120: 81.5768% ( 113) 00:08:01.939 11141.120 - 11200.698: 82.4139% ( 105) 00:08:01.939 11200.698 - 11260.276: 83.0517% ( 80) 00:08:01.939 11260.276 - 11319.855: 83.7771% ( 91) 00:08:01.939 11319.855 - 11379.433: 84.5344% ( 95) 00:08:01.939 11379.433 - 11439.011: 85.1164% ( 73) 00:08:01.939 11439.011 - 11498.589: 85.6744% ( 70) 00:08:01.939 11498.589 - 11558.167: 86.1926% ( 65) 00:08:01.939 11558.167 - 11617.745: 86.7188% ( 66) 00:08:01.939 11617.745 - 11677.324: 87.2688% ( 69) 00:08:01.939 11677.324 - 11736.902: 87.7551% ( 61) 00:08:01.939 11736.902 - 11796.480: 88.2573% ( 63) 00:08:01.939 11796.480 - 11856.058: 88.7197% ( 58) 00:08:01.939 11856.058 - 11915.636: 89.2937% ( 72) 00:08:01.939 11915.636 - 11975.215: 89.7321% ( 55) 00:08:01.939 11975.215 - 12034.793: 90.1706% ( 55) 00:08:01.939 12034.793 - 12094.371: 90.6888% ( 65) 00:08:01.939 12094.371 - 12153.949: 91.1591% ( 59) 00:08:01.939 12153.949 - 12213.527: 91.6215% ( 58) 00:08:01.939 12213.527 - 12273.105: 91.9882% ( 46) 00:08:01.939 12273.105 - 12332.684: 92.3230% ( 42) 00:08:01.939 12332.684 - 12392.262: 92.7216% ( 50) 00:08:01.939 12392.262 - 12451.840: 92.9767% ( 32) 00:08:01.939 12451.840 - 12511.418: 93.2637% ( 36) 
00:08:01.939 12511.418 - 12570.996: 93.5268% ( 33) 00:08:01.939 12570.996 - 12630.575: 93.7739% ( 31) 00:08:01.939 12630.575 - 12690.153: 94.0051% ( 29) 00:08:01.939 12690.153 - 12749.731: 94.1885% ( 23) 00:08:01.939 12749.731 - 12809.309: 94.4037% ( 27) 00:08:01.939 12809.309 - 12868.887: 94.5711% ( 21) 00:08:01.939 12868.887 - 12928.465: 94.8182% ( 31) 00:08:01.939 12928.465 - 12988.044: 94.9936% ( 22) 00:08:01.939 12988.044 - 13047.622: 95.1531% ( 20) 00:08:01.939 13047.622 - 13107.200: 95.3125% ( 20) 00:08:01.939 13107.200 - 13166.778: 95.5038% ( 24) 00:08:01.939 13166.778 - 13226.356: 95.6952% ( 24) 00:08:01.939 13226.356 - 13285.935: 95.8626% ( 21) 00:08:01.939 13285.935 - 13345.513: 95.9821% ( 15) 00:08:01.939 13345.513 - 13405.091: 96.2054% ( 28) 00:08:01.939 13405.091 - 13464.669: 96.3170% ( 14) 00:08:01.939 13464.669 - 13524.247: 96.4047% ( 11) 00:08:01.939 13524.247 - 13583.825: 96.5561% ( 19) 00:08:01.939 13583.825 - 13643.404: 96.6996% ( 18) 00:08:01.939 13643.404 - 13702.982: 96.8431% ( 18) 00:08:01.939 13702.982 - 13762.560: 97.0265% ( 23) 00:08:01.939 13762.560 - 13822.138: 97.1301% ( 13) 00:08:01.939 13822.138 - 13881.716: 97.2258% ( 12) 00:08:01.939 13881.716 - 13941.295: 97.3693% ( 18) 00:08:01.939 13941.295 - 14000.873: 97.4809% ( 14) 00:08:01.939 14000.873 - 14060.451: 97.5925% ( 14) 00:08:01.939 14060.451 - 14120.029: 97.7280% ( 17) 00:08:01.939 14120.029 - 14179.607: 97.8635% ( 17) 00:08:01.939 14179.607 - 14239.185: 97.9512% ( 11) 00:08:01.939 14239.185 - 14298.764: 98.0548% ( 13) 00:08:01.939 14298.764 - 14358.342: 98.1346% ( 10) 00:08:01.939 14358.342 - 14417.920: 98.2223% ( 11) 00:08:01.939 14417.920 - 14477.498: 98.3099% ( 11) 00:08:01.939 14477.498 - 14537.076: 98.4056% ( 12) 00:08:01.939 14537.076 - 14596.655: 98.4933% ( 11) 00:08:01.939 14596.655 - 14656.233: 98.5810% ( 11) 00:08:01.939 14656.233 - 14715.811: 98.6448% ( 8) 00:08:01.939 14715.811 - 14775.389: 98.7165% ( 9) 00:08:01.939 14775.389 - 14834.967: 98.7564% ( 5) 00:08:01.939 14834.967 - 14894.545: 98.8122% ( 7) 00:08:01.939 14894.545 - 14954.124: 98.8600% ( 6) 00:08:01.939 14954.124 - 15013.702: 98.8919% ( 4) 00:08:01.939 15013.702 - 15073.280: 98.9078% ( 2) 00:08:01.939 15073.280 - 15132.858: 98.9318% ( 3) 00:08:01.939 15132.858 - 15192.436: 98.9477% ( 2) 00:08:01.939 15192.436 - 15252.015: 98.9636% ( 2) 00:08:01.939 15252.015 - 15371.171: 98.9796% ( 2) 00:08:01.939 21209.833 - 21328.989: 98.9876% ( 1) 00:08:01.939 21328.989 - 21448.145: 99.0354% ( 6) 00:08:01.939 21448.145 - 21567.302: 99.0673% ( 4) 00:08:01.939 21567.302 - 21686.458: 99.1071% ( 5) 00:08:01.939 21686.458 - 21805.615: 99.1550% ( 6) 00:08:01.939 21805.615 - 21924.771: 99.1789% ( 3) 00:08:01.939 21924.771 - 22043.927: 99.2188% ( 5) 00:08:01.939 22043.927 - 22163.084: 99.2586% ( 5) 00:08:01.939 22163.084 - 22282.240: 99.2905% ( 4) 00:08:01.939 22282.240 - 22401.396: 99.3304% ( 5) 00:08:01.939 22401.396 - 22520.553: 99.3543% ( 3) 00:08:01.939 22520.553 - 22639.709: 99.3941% ( 5) 00:08:01.939 22639.709 - 22758.865: 99.4260% ( 4) 00:08:01.939 22758.865 - 22878.022: 99.4659% ( 5) 00:08:01.939 22878.022 - 22997.178: 99.4898% ( 3) 00:08:01.939 28835.840 - 28954.996: 99.4978% ( 1) 00:08:01.939 28954.996 - 29074.153: 99.5297% ( 4) 00:08:01.939 29074.153 - 29193.309: 99.5695% ( 5) 00:08:01.939 29193.309 - 29312.465: 99.6014% ( 4) 00:08:01.939 29312.465 - 29431.622: 99.6333% ( 4) 00:08:01.939 29431.622 - 29550.778: 99.6572% ( 3) 00:08:01.939 29550.778 - 29669.935: 99.6971% ( 5) 00:08:01.939 29669.935 - 29789.091: 99.7290% ( 4) 00:08:01.939 
29789.091 - 29908.247: 99.7688% ( 5) 00:08:01.939 29908.247 - 30027.404: 99.8166% ( 6) 00:08:01.939 30027.404 - 30146.560: 99.8406% ( 3) 00:08:01.939 30146.560 - 30265.716: 99.8724% ( 4) 00:08:01.939 30265.716 - 30384.873: 99.8964% ( 3) 00:08:01.939 30384.873 - 30504.029: 99.9283% ( 4) 00:08:01.939 30504.029 - 30742.342: 100.0000% ( 9) 00:08:01.939 00:08:01.939 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:01.939 ============================================================================== 00:08:01.939 Range in us Cumulative IO count 00:08:01.939 5838.662 - 5868.451: 0.0239% ( 3) 00:08:01.939 5868.451 - 5898.240: 0.0638% ( 5) 00:08:01.939 5898.240 - 5928.029: 0.0717% ( 1) 00:08:01.939 5928.029 - 5957.818: 0.0877% ( 2) 00:08:01.939 5957.818 - 5987.607: 0.0957% ( 1) 00:08:01.939 5987.607 - 6017.396: 0.1116% ( 2) 00:08:01.939 6017.396 - 6047.185: 0.1276% ( 2) 00:08:01.939 6047.185 - 6076.975: 0.1435% ( 2) 00:08:01.939 6076.975 - 6106.764: 0.1594% ( 2) 00:08:01.939 6106.764 - 6136.553: 0.1754% ( 2) 00:08:01.939 6136.553 - 6166.342: 0.1913% ( 2) 00:08:01.939 6166.342 - 6196.131: 0.2073% ( 2) 00:08:01.939 6196.131 - 6225.920: 0.2232% ( 2) 00:08:01.939 6225.920 - 6255.709: 0.2392% ( 2) 00:08:01.939 6255.709 - 6285.498: 0.2551% ( 2) 00:08:01.939 6285.498 - 6315.287: 0.2790% ( 3) 00:08:01.939 6315.287 - 6345.076: 0.2950% ( 2) 00:08:01.939 6345.076 - 6374.865: 0.3109% ( 2) 00:08:01.939 6374.865 - 6404.655: 0.3268% ( 2) 00:08:01.939 6404.655 - 6434.444: 0.3508% ( 3) 00:08:01.939 6434.444 - 6464.233: 0.3667% ( 2) 00:08:01.939 6464.233 - 6494.022: 0.3827% ( 2) 00:08:01.939 6494.022 - 6523.811: 0.3986% ( 2) 00:08:01.939 6523.811 - 6553.600: 0.4225% ( 3) 00:08:01.939 6553.600 - 6583.389: 0.4385% ( 2) 00:08:01.939 6583.389 - 6613.178: 0.4544% ( 2) 00:08:01.939 6613.178 - 6642.967: 0.4703% ( 2) 00:08:01.939 6642.967 - 6672.756: 0.4943% ( 3) 00:08:01.939 6672.756 - 6702.545: 0.5102% ( 2) 00:08:01.939 8340.945 - 8400.524: 0.5660% ( 7) 00:08:01.939 8400.524 - 8460.102: 0.7494% ( 23) 00:08:01.939 8460.102 - 8519.680: 0.8291% ( 10) 00:08:01.939 8519.680 - 8579.258: 1.0364% ( 26) 00:08:01.939 8579.258 - 8638.836: 1.4748% ( 55) 00:08:01.939 8638.836 - 8698.415: 2.3119% ( 105) 00:08:01.939 8698.415 - 8757.993: 3.6033% ( 162) 00:08:01.939 8757.993 - 8817.571: 5.4608% ( 233) 00:08:01.939 8817.571 - 8877.149: 7.8045% ( 294) 00:08:01.939 8877.149 - 8936.727: 10.5469% ( 344) 00:08:01.939 8936.727 - 8996.305: 13.6878% ( 394) 00:08:01.939 8996.305 - 9055.884: 17.0041% ( 416) 00:08:01.939 9055.884 - 9115.462: 20.2726% ( 410) 00:08:01.939 9115.462 - 9175.040: 23.9397% ( 460) 00:08:01.939 9175.040 - 9234.618: 27.3039% ( 422) 00:08:01.939 9234.618 - 9294.196: 30.5804% ( 411) 00:08:01.939 9294.196 - 9353.775: 33.9206% ( 419) 00:08:01.939 9353.775 - 9413.353: 36.9499% ( 380) 00:08:01.939 9413.353 - 9472.931: 40.0909% ( 394) 00:08:01.939 9472.931 - 9532.509: 42.9608% ( 360) 00:08:01.939 9532.509 - 9592.087: 46.0140% ( 383) 00:08:01.939 9592.087 - 9651.665: 49.2905% ( 411) 00:08:01.939 9651.665 - 9711.244: 52.1923% ( 364) 00:08:01.939 9711.244 - 9770.822: 54.7592% ( 322) 00:08:01.939 9770.822 - 9830.400: 56.9117% ( 270) 00:08:01.939 9830.400 - 9889.978: 58.9923% ( 261) 00:08:01.939 9889.978 - 9949.556: 60.9295% ( 243) 00:08:01.939 9949.556 - 10009.135: 62.3007% ( 172) 00:08:01.939 10009.135 - 10068.713: 63.4805% ( 148) 00:08:01.939 10068.713 - 10128.291: 64.6524% ( 147) 00:08:01.939 10128.291 - 10187.869: 65.7286% ( 135) 00:08:01.939 10187.869 - 10247.447: 66.7809% ( 132) 00:08:01.939 10247.447 - 
10307.025: 67.8332% ( 132) 00:08:01.939 10307.025 - 10366.604: 69.0848% ( 157) 00:08:01.939 10366.604 - 10426.182: 70.2487% ( 146) 00:08:01.939 10426.182 - 10485.760: 71.4047% ( 145) 00:08:01.939 10485.760 - 10545.338: 72.5686% ( 146) 00:08:01.939 10545.338 - 10604.916: 73.8600% ( 162) 00:08:01.939 10604.916 - 10664.495: 74.8087% ( 119) 00:08:01.939 10664.495 - 10724.073: 75.7892% ( 123) 00:08:01.939 10724.073 - 10783.651: 76.6103% ( 103) 00:08:01.939 10783.651 - 10843.229: 77.4713% ( 108) 00:08:01.939 10843.229 - 10902.807: 78.3004% ( 104) 00:08:01.939 10902.807 - 10962.385: 79.1693% ( 109) 00:08:01.939 10962.385 - 11021.964: 80.0702% ( 113) 00:08:01.939 11021.964 - 11081.542: 80.7956% ( 91) 00:08:01.939 11081.542 - 11141.120: 81.4334% ( 80) 00:08:01.940 11141.120 - 11200.698: 82.1189% ( 86) 00:08:01.940 11200.698 - 11260.276: 82.8284% ( 89) 00:08:01.940 11260.276 - 11319.855: 83.4582% ( 79) 00:08:01.940 11319.855 - 11379.433: 84.1358% ( 85) 00:08:01.940 11379.433 - 11439.011: 84.8294% ( 87) 00:08:01.940 11439.011 - 11498.589: 85.5867% ( 95) 00:08:01.940 11498.589 - 11558.167: 86.3202% ( 92) 00:08:01.940 11558.167 - 11617.745: 87.0695% ( 94) 00:08:01.940 11617.745 - 11677.324: 87.6674% ( 75) 00:08:01.940 11677.324 - 11736.902: 88.4646% ( 100) 00:08:01.940 11736.902 - 11796.480: 89.0625% ( 75) 00:08:01.940 11796.480 - 11856.058: 89.6046% ( 68) 00:08:01.940 11856.058 - 11915.636: 90.0829% ( 60) 00:08:01.940 11915.636 - 11975.215: 90.5612% ( 60) 00:08:01.940 11975.215 - 12034.793: 91.0316% ( 59) 00:08:01.940 12034.793 - 12094.371: 91.4142% ( 48) 00:08:01.940 12094.371 - 12153.949: 91.8766% ( 58) 00:08:01.940 12153.949 - 12213.527: 92.2752% ( 50) 00:08:01.940 12213.527 - 12273.105: 92.6578% ( 48) 00:08:01.940 12273.105 - 12332.684: 93.0006% ( 43) 00:08:01.940 12332.684 - 12392.262: 93.2956% ( 37) 00:08:01.940 12392.262 - 12451.840: 93.5666% ( 34) 00:08:01.940 12451.840 - 12511.418: 93.8058% ( 30) 00:08:01.940 12511.418 - 12570.996: 94.0131% ( 26) 00:08:01.940 12570.996 - 12630.575: 94.2203% ( 26) 00:08:01.940 12630.575 - 12690.153: 94.4436% ( 28) 00:08:01.940 12690.153 - 12749.731: 94.6588% ( 27) 00:08:01.940 12749.731 - 12809.309: 95.0016% ( 43) 00:08:01.940 12809.309 - 12868.887: 95.2009% ( 25) 00:08:01.940 12868.887 - 12928.465: 95.3683% ( 21) 00:08:01.940 12928.465 - 12988.044: 95.5198% ( 19) 00:08:01.940 12988.044 - 13047.622: 95.6314% ( 14) 00:08:01.940 13047.622 - 13107.200: 95.7350% ( 13) 00:08:01.940 13107.200 - 13166.778: 95.8466% ( 14) 00:08:01.940 13166.778 - 13226.356: 95.9742% ( 16) 00:08:01.940 13226.356 - 13285.935: 96.1097% ( 17) 00:08:01.940 13285.935 - 13345.513: 96.2372% ( 16) 00:08:01.940 13345.513 - 13405.091: 96.3568% ( 15) 00:08:01.940 13405.091 - 13464.669: 96.4764% ( 15) 00:08:01.940 13464.669 - 13524.247: 96.6119% ( 17) 00:08:01.940 13524.247 - 13583.825: 96.7554% ( 18) 00:08:01.940 13583.825 - 13643.404: 96.8750% ( 15) 00:08:01.940 13643.404 - 13702.982: 96.9946% ( 15) 00:08:01.940 13702.982 - 13762.560: 97.1381% ( 18) 00:08:01.940 13762.560 - 13822.138: 97.2895% ( 19) 00:08:01.940 13822.138 - 13881.716: 97.4570% ( 21) 00:08:01.940 13881.716 - 13941.295: 97.6323% ( 22) 00:08:01.940 13941.295 - 14000.873: 97.8874% ( 32) 00:08:01.940 14000.873 - 14060.451: 98.0150% ( 16) 00:08:01.940 14060.451 - 14120.029: 98.1186% ( 13) 00:08:01.940 14120.029 - 14179.607: 98.2063% ( 11) 00:08:01.940 14179.607 - 14239.185: 98.2860% ( 10) 00:08:01.940 14239.185 - 14298.764: 98.3737% ( 11) 00:08:01.940 14298.764 - 14358.342: 98.4534% ( 10) 00:08:01.940 14358.342 - 14417.920: 98.5332% ( 
10) 00:08:01.940 14417.920 - 14477.498: 98.5890% ( 7) 00:08:01.940 14477.498 - 14537.076: 98.6687% ( 10) 00:08:01.940 14537.076 - 14596.655: 98.7245% ( 7) 00:08:01.940 14596.655 - 14656.233: 98.7803% ( 7) 00:08:01.940 14656.233 - 14715.811: 98.8281% ( 6) 00:08:01.940 14715.811 - 14775.389: 98.8680% ( 5) 00:08:01.940 14775.389 - 14834.967: 98.9078% ( 5) 00:08:01.940 14834.967 - 14894.545: 98.9397% ( 4) 00:08:01.940 14894.545 - 14954.124: 98.9636% ( 3) 00:08:01.940 14954.124 - 15013.702: 98.9796% ( 2) 00:08:01.940 21805.615 - 21924.771: 99.0035% ( 3) 00:08:01.940 21924.771 - 22043.927: 99.0354% ( 4) 00:08:01.940 22043.927 - 22163.084: 99.0673% ( 4) 00:08:01.940 22163.084 - 22282.240: 99.0992% ( 4) 00:08:01.940 22282.240 - 22401.396: 99.1470% ( 6) 00:08:01.940 22401.396 - 22520.553: 99.1789% ( 4) 00:08:01.940 22520.553 - 22639.709: 99.2267% ( 6) 00:08:01.940 22639.709 - 22758.865: 99.2666% ( 5) 00:08:01.940 22758.865 - 22878.022: 99.3064% ( 5) 00:08:01.940 22878.022 - 22997.178: 99.3463% ( 5) 00:08:01.940 22997.178 - 23116.335: 99.3941% ( 6) 00:08:01.940 23116.335 - 23235.491: 99.4420% ( 6) 00:08:01.940 23235.491 - 23354.647: 99.4818% ( 5) 00:08:01.940 23354.647 - 23473.804: 99.4898% ( 1) 00:08:01.940 28597.527 - 28716.684: 99.5137% ( 3) 00:08:01.940 28716.684 - 28835.840: 99.5615% ( 6) 00:08:01.940 28835.840 - 28954.996: 99.6014% ( 5) 00:08:01.940 28954.996 - 29074.153: 99.6492% ( 6) 00:08:01.940 29074.153 - 29193.309: 99.6811% ( 4) 00:08:01.940 29193.309 - 29312.465: 99.7290% ( 6) 00:08:01.940 29312.465 - 29431.622: 99.7688% ( 5) 00:08:01.940 29431.622 - 29550.778: 99.8087% ( 5) 00:08:01.940 29550.778 - 29669.935: 99.8565% ( 6) 00:08:01.940 29669.935 - 29789.091: 99.8964% ( 5) 00:08:01.940 29789.091 - 29908.247: 99.9362% ( 5) 00:08:01.940 29908.247 - 30027.404: 99.9681% ( 4) 00:08:01.940 30027.404 - 30146.560: 100.0000% ( 4) 00:08:01.940 00:08:01.940 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:01.940 ============================================================================== 00:08:01.940 Range in us Cumulative IO count 00:08:01.940 5153.513 - 5183.302: 0.0080% ( 1) 00:08:01.940 5183.302 - 5213.091: 0.0478% ( 5) 00:08:01.940 5213.091 - 5242.880: 0.1196% ( 9) 00:08:01.940 5242.880 - 5272.669: 0.1993% ( 10) 00:08:01.940 5272.669 - 5302.458: 0.2392% ( 5) 00:08:01.940 5302.458 - 5332.247: 0.2471% ( 1) 00:08:01.940 5332.247 - 5362.036: 0.2631% ( 2) 00:08:01.940 5362.036 - 5391.825: 0.2790% ( 2) 00:08:01.940 5391.825 - 5421.615: 0.2950% ( 2) 00:08:01.940 5421.615 - 5451.404: 0.3029% ( 1) 00:08:01.940 5451.404 - 5481.193: 0.3189% ( 2) 00:08:01.940 5481.193 - 5510.982: 0.3268% ( 1) 00:08:01.940 5510.982 - 5540.771: 0.3428% ( 2) 00:08:01.940 5540.771 - 5570.560: 0.3508% ( 1) 00:08:01.940 5570.560 - 5600.349: 0.3667% ( 2) 00:08:01.940 5600.349 - 5630.138: 0.3827% ( 2) 00:08:01.940 5689.716 - 5719.505: 0.3986% ( 2) 00:08:01.940 5719.505 - 5749.295: 0.4145% ( 2) 00:08:01.940 5749.295 - 5779.084: 0.4305% ( 2) 00:08:01.940 5808.873 - 5838.662: 0.4464% ( 2) 00:08:01.940 5838.662 - 5868.451: 0.4624% ( 2) 00:08:01.940 5868.451 - 5898.240: 0.4783% ( 2) 00:08:01.940 5898.240 - 5928.029: 0.4943% ( 2) 00:08:01.940 5928.029 - 5957.818: 0.5102% ( 2) 00:08:01.940 7447.273 - 7477.062: 0.5182% ( 1) 00:08:01.940 7477.062 - 7506.851: 0.5820% ( 8) 00:08:01.940 7506.851 - 7536.640: 0.6298% ( 6) 00:08:01.940 7536.640 - 7566.429: 0.6696% ( 5) 00:08:01.940 7566.429 - 7596.218: 0.7334% ( 8) 00:08:01.940 7596.218 - 7626.007: 0.7653% ( 4) 00:08:01.940 7626.007 - 7685.585: 0.7812% ( 2) 00:08:01.940 
7685.585 - 7745.164: 0.8052% ( 3) 00:08:01.940 7745.164 - 7804.742: 0.8371% ( 4) 00:08:01.940 7804.742 - 7864.320: 0.8610% ( 3) 00:08:01.940 7864.320 - 7923.898: 0.8849% ( 3) 00:08:01.940 7923.898 - 7983.476: 0.9008% ( 2) 00:08:01.940 7983.476 - 8043.055: 0.9327% ( 4) 00:08:01.940 8043.055 - 8102.633: 0.9487% ( 2) 00:08:01.940 8102.633 - 8162.211: 0.9726% ( 3) 00:08:01.940 8162.211 - 8221.789: 0.9965% ( 3) 00:08:01.940 8221.789 - 8281.367: 1.0204% ( 3) 00:08:01.940 8340.945 - 8400.524: 1.0364% ( 2) 00:08:01.940 8400.524 - 8460.102: 1.0682% ( 4) 00:08:01.940 8460.102 - 8519.680: 1.1559% ( 11) 00:08:01.940 8519.680 - 8579.258: 1.4349% ( 35) 00:08:01.940 8579.258 - 8638.836: 2.0010% ( 71) 00:08:01.940 8638.836 - 8698.415: 2.8061% ( 101) 00:08:01.940 8698.415 - 8757.993: 4.0657% ( 158) 00:08:01.940 8757.993 - 8817.571: 5.9072% ( 231) 00:08:01.940 8817.571 - 8877.149: 7.9241% ( 253) 00:08:01.940 8877.149 - 8936.727: 10.5947% ( 335) 00:08:01.940 8936.727 - 8996.305: 13.3371% ( 344) 00:08:01.940 8996.305 - 9055.884: 16.4700% ( 393) 00:08:01.940 9055.884 - 9115.462: 19.7704% ( 414) 00:08:01.940 9115.462 - 9175.040: 23.2302% ( 434) 00:08:01.940 9175.040 - 9234.618: 26.5705% ( 419) 00:08:01.940 9234.618 - 9294.196: 29.7114% ( 394) 00:08:01.940 9294.196 - 9353.775: 32.9719% ( 409) 00:08:01.940 9353.775 - 9413.353: 36.2404% ( 410) 00:08:01.940 9413.353 - 9472.931: 39.3495% ( 390) 00:08:01.940 9472.931 - 9532.509: 42.5781% ( 405) 00:08:01.940 9532.509 - 9592.087: 46.0300% ( 433) 00:08:01.940 9592.087 - 9651.665: 49.0912% ( 384) 00:08:01.940 9651.665 - 9711.244: 51.7459% ( 333) 00:08:01.940 9711.244 - 9770.822: 54.3447% ( 326) 00:08:01.940 9770.822 - 9830.400: 56.7921% ( 307) 00:08:01.940 9830.400 - 9889.978: 58.8489% ( 258) 00:08:01.940 9889.978 - 9949.556: 60.5708% ( 216) 00:08:01.940 9949.556 - 10009.135: 62.1094% ( 193) 00:08:01.940 10009.135 - 10068.713: 63.4566% ( 169) 00:08:01.940 10068.713 - 10128.291: 64.7879% ( 167) 00:08:01.940 10128.291 - 10187.869: 66.0794% ( 162) 00:08:01.940 10187.869 - 10247.447: 67.3071% ( 154) 00:08:01.940 10247.447 - 10307.025: 68.6543% ( 169) 00:08:01.940 10307.025 - 10366.604: 69.9139% ( 158) 00:08:01.940 10366.604 - 10426.182: 71.0619% ( 144) 00:08:01.940 10426.182 - 10485.760: 72.2736% ( 152) 00:08:01.940 10485.760 - 10545.338: 73.4853% ( 152) 00:08:01.940 10545.338 - 10604.916: 74.6094% ( 141) 00:08:01.940 10604.916 - 10664.495: 75.7095% ( 138) 00:08:01.940 10664.495 - 10724.073: 76.6901% ( 123) 00:08:01.940 10724.073 - 10783.651: 77.6786% ( 124) 00:08:01.940 10783.651 - 10843.229: 78.5874% ( 114) 00:08:01.940 10843.229 - 10902.807: 79.2809% ( 87) 00:08:01.940 10902.807 - 10962.385: 79.9027% ( 78) 00:08:01.940 10962.385 - 11021.964: 80.5564% ( 82) 00:08:01.940 11021.964 - 11081.542: 81.2341% ( 85) 00:08:01.940 11081.542 - 11141.120: 81.8080% ( 72) 00:08:01.940 11141.120 - 11200.698: 82.3661% ( 70) 00:08:01.940 11200.698 - 11260.276: 82.9241% ( 70) 00:08:01.940 11260.276 - 11319.855: 83.4901% ( 71) 00:08:01.940 11319.855 - 11379.433: 84.1996% ( 89) 00:08:01.940 11379.433 - 11439.011: 84.7816% ( 73) 00:08:01.940 11439.011 - 11498.589: 85.4672% ( 86) 00:08:01.940 11498.589 - 11558.167: 86.2325% ( 96) 00:08:01.940 11558.167 - 11617.745: 87.0934% ( 108) 00:08:01.940 11617.745 - 11677.324: 87.8029% ( 89) 00:08:01.940 11677.324 - 11736.902: 88.3610% ( 70) 00:08:01.941 11736.902 - 11796.480: 89.2140% ( 107) 00:08:01.941 11796.480 - 11856.058: 89.8039% ( 74) 00:08:01.941 11856.058 - 11915.636: 90.3221% ( 65) 00:08:01.941 11915.636 - 11975.215: 90.8004% ( 60) 00:08:01.941 
11975.215 - 12034.793: 91.2388% ( 55) 00:08:01.941 12034.793 - 12094.371: 91.6853% ( 56) 00:08:01.941 12094.371 - 12153.949: 92.0121% ( 41) 00:08:01.941 12153.949 - 12213.527: 92.3788% ( 46) 00:08:01.941 12213.527 - 12273.105: 92.7057% ( 41) 00:08:01.941 12273.105 - 12332.684: 93.0086% ( 38) 00:08:01.941 12332.684 - 12392.262: 93.3355% ( 41) 00:08:01.941 12392.262 - 12451.840: 93.6224% ( 36) 00:08:01.941 12451.840 - 12511.418: 93.9015% ( 35) 00:08:01.941 12511.418 - 12570.996: 94.1327% ( 29) 00:08:01.941 12570.996 - 12630.575: 94.3559% ( 28) 00:08:01.941 12630.575 - 12690.153: 94.5392% ( 23) 00:08:01.941 12690.153 - 12749.731: 94.7305% ( 24) 00:08:01.941 12749.731 - 12809.309: 94.9059% ( 22) 00:08:01.941 12809.309 - 12868.887: 95.0733% ( 21) 00:08:01.941 12868.887 - 12928.465: 95.1770% ( 13) 00:08:01.941 12928.465 - 12988.044: 95.3045% ( 16) 00:08:01.941 12988.044 - 13047.622: 95.4082% ( 13) 00:08:01.941 13047.622 - 13107.200: 95.5277% ( 15) 00:08:01.941 13107.200 - 13166.778: 95.6393% ( 14) 00:08:01.941 13166.778 - 13226.356: 95.7988% ( 20) 00:08:01.941 13226.356 - 13285.935: 95.9263% ( 16) 00:08:01.941 13285.935 - 13345.513: 96.1177% ( 24) 00:08:01.941 13345.513 - 13405.091: 96.2930% ( 22) 00:08:01.941 13405.091 - 13464.669: 96.4365% ( 18) 00:08:01.941 13464.669 - 13524.247: 96.6199% ( 23) 00:08:01.941 13524.247 - 13583.825: 96.8192% ( 25) 00:08:01.941 13583.825 - 13643.404: 97.0026% ( 23) 00:08:01.941 13643.404 - 13702.982: 97.1939% ( 24) 00:08:01.941 13702.982 - 13762.560: 97.3772% ( 23) 00:08:01.941 13762.560 - 13822.138: 97.5446% ( 21) 00:08:01.941 13822.138 - 13881.716: 97.6642% ( 15) 00:08:01.941 13881.716 - 13941.295: 97.7519% ( 11) 00:08:01.941 13941.295 - 14000.873: 97.8396% ( 11) 00:08:01.941 14000.873 - 14060.451: 97.9273% ( 11) 00:08:01.941 14060.451 - 14120.029: 98.0070% ( 10) 00:08:01.941 14120.029 - 14179.607: 98.0947% ( 11) 00:08:01.941 14179.607 - 14239.185: 98.1665% ( 9) 00:08:01.941 14239.185 - 14298.764: 98.2541% ( 11) 00:08:01.941 14298.764 - 14358.342: 98.3259% ( 9) 00:08:01.941 14358.342 - 14417.920: 98.4056% ( 10) 00:08:01.941 14417.920 - 14477.498: 98.4853% ( 10) 00:08:01.941 14477.498 - 14537.076: 98.5571% ( 9) 00:08:01.941 14537.076 - 14596.655: 98.6288% ( 9) 00:08:01.941 14596.655 - 14656.233: 98.6846% ( 7) 00:08:01.941 14656.233 - 14715.811: 98.7325% ( 6) 00:08:01.941 14715.811 - 14775.389: 98.7883% ( 7) 00:08:01.941 14775.389 - 14834.967: 98.8361% ( 6) 00:08:01.941 14834.967 - 14894.545: 98.8760% ( 5) 00:08:01.941 14894.545 - 14954.124: 98.9238% ( 6) 00:08:01.941 14954.124 - 15013.702: 98.9557% ( 4) 00:08:01.941 15013.702 - 15073.280: 98.9796% ( 3) 00:08:01.941 21924.771 - 22043.927: 98.9876% ( 1) 00:08:01.941 22043.927 - 22163.084: 99.0195% ( 4) 00:08:01.941 22163.084 - 22282.240: 99.0673% ( 6) 00:08:01.941 22282.240 - 22401.396: 99.0992% ( 4) 00:08:01.941 22401.396 - 22520.553: 99.1470% ( 6) 00:08:01.941 22520.553 - 22639.709: 99.1789% ( 4) 00:08:01.941 22639.709 - 22758.865: 99.2188% ( 5) 00:08:01.941 22758.865 - 22878.022: 99.2506% ( 4) 00:08:01.941 22878.022 - 22997.178: 99.2905% ( 5) 00:08:01.941 22997.178 - 23116.335: 99.3304% ( 5) 00:08:01.941 23116.335 - 23235.491: 99.3702% ( 5) 00:08:01.941 23235.491 - 23354.647: 99.4101% ( 5) 00:08:01.941 23354.647 - 23473.804: 99.4579% ( 6) 00:08:01.941 23473.804 - 23592.960: 99.4898% ( 4) 00:08:01.941 28835.840 - 28954.996: 99.5217% ( 4) 00:08:01.941 28954.996 - 29074.153: 99.5536% ( 4) 00:08:01.941 29074.153 - 29193.309: 99.5855% ( 4) 00:08:01.941 29193.309 - 29312.465: 99.6413% ( 7) 00:08:01.941 29312.465 - 
29431.622: 99.6732% ( 4) 00:08:01.941 29431.622 - 29550.778: 99.7130% ( 5) 00:08:01.941 29550.778 - 29669.935: 99.7529% ( 5) 00:08:01.941 29669.935 - 29789.091: 99.7848% ( 4) 00:08:01.941 29789.091 - 29908.247: 99.8246% ( 5) 00:08:01.941 29908.247 - 30027.404: 99.8565% ( 4) 00:08:01.941 30027.404 - 30146.560: 99.8884% ( 4) 00:08:01.941 30146.560 - 30265.716: 99.9283% ( 5) 00:08:01.941 30265.716 - 30384.873: 99.9761% ( 6) 00:08:01.941 30384.873 - 30504.029: 100.0000% ( 3) 00:08:01.941 00:08:01.941 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:01.941 ============================================================================== 00:08:01.941 Range in us Cumulative IO count 00:08:01.941 4766.255 - 4796.044: 0.0159% ( 2) 00:08:01.941 4796.044 - 4825.833: 0.0638% ( 6) 00:08:01.941 4825.833 - 4855.622: 0.1116% ( 6) 00:08:01.941 4855.622 - 4885.411: 0.1834% ( 9) 00:08:01.941 4885.411 - 4915.200: 0.2232% ( 5) 00:08:01.941 4915.200 - 4944.989: 0.2392% ( 2) 00:08:01.941 4944.989 - 4974.778: 0.2471% ( 1) 00:08:01.941 4974.778 - 5004.567: 0.2631% ( 2) 00:08:01.941 5004.567 - 5034.356: 0.2710% ( 1) 00:08:01.941 5034.356 - 5064.145: 0.2870% ( 2) 00:08:01.941 5064.145 - 5093.935: 0.2950% ( 1) 00:08:01.941 5093.935 - 5123.724: 0.3029% ( 1) 00:08:01.941 5123.724 - 5153.513: 0.3189% ( 2) 00:08:01.941 5153.513 - 5183.302: 0.3348% ( 2) 00:08:01.941 5183.302 - 5213.091: 0.3508% ( 2) 00:08:01.941 5213.091 - 5242.880: 0.3587% ( 1) 00:08:01.941 5242.880 - 5272.669: 0.3747% ( 2) 00:08:01.941 5272.669 - 5302.458: 0.3906% ( 2) 00:08:01.941 5302.458 - 5332.247: 0.4066% ( 2) 00:08:01.941 5332.247 - 5362.036: 0.4145% ( 1) 00:08:01.941 5362.036 - 5391.825: 0.4305% ( 2) 00:08:01.941 5391.825 - 5421.615: 0.4464% ( 2) 00:08:01.941 5421.615 - 5451.404: 0.4624% ( 2) 00:08:01.941 5451.404 - 5481.193: 0.4863% ( 3) 00:08:01.941 5481.193 - 5510.982: 0.5022% ( 2) 00:08:01.941 5510.982 - 5540.771: 0.5102% ( 1) 00:08:01.941 7208.960 - 7238.749: 0.5660% ( 7) 00:08:01.941 7238.749 - 7268.538: 0.6298% ( 8) 00:08:01.941 7268.538 - 7298.327: 0.6537% ( 3) 00:08:01.941 7298.327 - 7328.116: 0.7095% ( 7) 00:08:01.941 7328.116 - 7357.905: 0.7254% ( 2) 00:08:01.941 7357.905 - 7387.695: 0.7414% ( 2) 00:08:01.941 7387.695 - 7417.484: 0.7573% ( 2) 00:08:01.941 7417.484 - 7447.273: 0.7653% ( 1) 00:08:01.941 7447.273 - 7477.062: 0.7812% ( 2) 00:08:01.941 7477.062 - 7506.851: 0.7972% ( 2) 00:08:01.941 7506.851 - 7536.640: 0.8131% ( 2) 00:08:01.941 7536.640 - 7566.429: 0.8291% ( 2) 00:08:01.941 7566.429 - 7596.218: 0.8371% ( 1) 00:08:01.941 7596.218 - 7626.007: 0.8530% ( 2) 00:08:01.941 7626.007 - 7685.585: 0.8769% ( 3) 00:08:01.941 7685.585 - 7745.164: 0.9008% ( 3) 00:08:01.941 7745.164 - 7804.742: 0.9327% ( 4) 00:08:01.941 7804.742 - 7864.320: 0.9726% ( 5) 00:08:01.941 7864.320 - 7923.898: 1.0045% ( 4) 00:08:01.941 7923.898 - 7983.476: 1.0204% ( 2) 00:08:01.941 8340.945 - 8400.524: 1.0284% ( 1) 00:08:01.941 8400.524 - 8460.102: 1.0762% ( 6) 00:08:01.941 8460.102 - 8519.680: 1.1798% ( 13) 00:08:01.941 8519.680 - 8579.258: 1.4190% ( 30) 00:08:01.941 8579.258 - 8638.836: 1.9133% ( 62) 00:08:01.941 8638.836 - 8698.415: 2.8699% ( 120) 00:08:01.941 8698.415 - 8757.993: 3.9302% ( 133) 00:08:01.941 8757.993 - 8817.571: 5.5325% ( 201) 00:08:01.941 8817.571 - 8877.149: 7.6052% ( 260) 00:08:01.941 8877.149 - 8936.727: 10.3795% ( 348) 00:08:01.941 8936.727 - 8996.305: 13.4247% ( 382) 00:08:01.941 8996.305 - 9055.884: 16.8846% ( 434) 00:08:01.941 9055.884 - 9115.462: 20.4799% ( 451) 00:08:01.941 9115.462 - 9175.040: 23.9796% ( 439) 
00:08:01.941 9175.040 - 9234.618: 27.2959% ( 416) 00:08:01.941 9234.618 - 9294.196: 30.5405% ( 407) 00:08:01.941 9294.196 - 9353.775: 33.4184% ( 361) 00:08:01.941 9353.775 - 9413.353: 36.5115% ( 388) 00:08:01.941 9413.353 - 9472.931: 39.6126% ( 389) 00:08:01.941 9472.931 - 9532.509: 42.5143% ( 364) 00:08:01.941 9532.509 - 9592.087: 45.5995% ( 387) 00:08:01.941 9592.087 - 9651.665: 48.4853% ( 362) 00:08:01.941 9651.665 - 9711.244: 51.5944% ( 390) 00:08:01.941 9711.244 - 9770.822: 54.1614% ( 322) 00:08:01.941 9770.822 - 9830.400: 56.4892% ( 292) 00:08:01.941 9830.400 - 9889.978: 58.4024% ( 240) 00:08:01.941 9889.978 - 9949.556: 60.0207% ( 203) 00:08:01.941 9949.556 - 10009.135: 61.5513% ( 192) 00:08:01.941 10009.135 - 10068.713: 63.0580% ( 189) 00:08:01.941 10068.713 - 10128.291: 64.6524% ( 200) 00:08:01.941 10128.291 - 10187.869: 66.0077% ( 170) 00:08:01.941 10187.869 - 10247.447: 67.3948% ( 174) 00:08:01.941 10247.447 - 10307.025: 68.7659% ( 172) 00:08:01.941 10307.025 - 10366.604: 70.2248% ( 183) 00:08:01.941 10366.604 - 10426.182: 71.5880% ( 171) 00:08:01.941 10426.182 - 10485.760: 72.8077% ( 153) 00:08:01.941 10485.760 - 10545.338: 73.8520% ( 131) 00:08:01.941 10545.338 - 10604.916: 75.1515% ( 163) 00:08:01.941 10604.916 - 10664.495: 76.1320% ( 123) 00:08:01.941 10664.495 - 10724.073: 76.9770% ( 106) 00:08:01.941 10724.073 - 10783.651: 77.8221% ( 106) 00:08:01.941 10783.651 - 10843.229: 78.5953% ( 97) 00:08:01.941 10843.229 - 10902.807: 79.3367% ( 93) 00:08:01.941 10902.807 - 10962.385: 79.9984% ( 83) 00:08:01.941 10962.385 - 11021.964: 80.5325% ( 67) 00:08:01.941 11021.964 - 11081.542: 81.1065% ( 72) 00:08:01.941 11081.542 - 11141.120: 81.7203% ( 77) 00:08:01.941 11141.120 - 11200.698: 82.3023% ( 73) 00:08:01.941 11200.698 - 11260.276: 82.9241% ( 78) 00:08:01.941 11260.276 - 11319.855: 83.5539% ( 79) 00:08:01.941 11319.855 - 11379.433: 84.3272% ( 97) 00:08:01.941 11379.433 - 11439.011: 84.9570% ( 79) 00:08:01.941 11439.011 - 11498.589: 85.6983% ( 93) 00:08:01.941 11498.589 - 11558.167: 86.3600% ( 83) 00:08:01.941 11558.167 - 11617.745: 87.0297% ( 84) 00:08:01.941 11617.745 - 11677.324: 87.6993% ( 84) 00:08:01.942 11677.324 - 11736.902: 88.4965% ( 100) 00:08:01.942 11736.902 - 11796.480: 89.1741% ( 85) 00:08:01.942 11796.480 - 11856.058: 89.7242% ( 69) 00:08:01.942 11856.058 - 11915.636: 90.2982% ( 72) 00:08:01.942 11915.636 - 11975.215: 90.8482% ( 69) 00:08:01.942 11975.215 - 12034.793: 91.3823% ( 67) 00:08:01.942 12034.793 - 12094.371: 91.8208% ( 55) 00:08:01.942 12094.371 - 12153.949: 92.2911% ( 59) 00:08:01.942 12153.949 - 12213.527: 92.6897% ( 50) 00:08:01.942 12213.527 - 12273.105: 93.0724% ( 48) 00:08:01.942 12273.105 - 12332.684: 93.3594% ( 36) 00:08:01.942 12332.684 - 12392.262: 93.5985% ( 30) 00:08:01.942 12392.262 - 12451.840: 93.8536% ( 32) 00:08:01.942 12451.840 - 12511.418: 94.0928% ( 30) 00:08:01.942 12511.418 - 12570.996: 94.2761% ( 23) 00:08:01.942 12570.996 - 12630.575: 94.4675% ( 24) 00:08:01.942 12630.575 - 12690.153: 94.6429% ( 22) 00:08:01.942 12690.153 - 12749.731: 94.7784% ( 17) 00:08:01.942 12749.731 - 12809.309: 94.8820% ( 13) 00:08:01.942 12809.309 - 12868.887: 95.0175% ( 17) 00:08:01.942 12868.887 - 12928.465: 95.1291% ( 14) 00:08:01.942 12928.465 - 12988.044: 95.2408% ( 14) 00:08:01.942 12988.044 - 13047.622: 95.3444% ( 13) 00:08:01.942 13047.622 - 13107.200: 95.4321% ( 11) 00:08:01.942 13107.200 - 13166.778: 95.5277% ( 12) 00:08:01.942 13166.778 - 13226.356: 95.6712% ( 18) 00:08:01.942 13226.356 - 13285.935: 95.8307% ( 20) 00:08:01.942 13285.935 - 
13345.513: 95.9981% ( 21) 00:08:01.942 13345.513 - 13405.091: 96.1496% ( 19) 00:08:01.942 13405.091 - 13464.669: 96.2771% ( 16) 00:08:01.942 13464.669 - 13524.247: 96.4126% ( 17) 00:08:01.942 13524.247 - 13583.825: 96.6438% ( 29) 00:08:01.942 13583.825 - 13643.404: 96.8272% ( 23) 00:08:01.942 13643.404 - 13702.982: 96.9627% ( 17) 00:08:01.942 13702.982 - 13762.560: 97.1062% ( 18) 00:08:01.942 13762.560 - 13822.138: 97.2337% ( 16) 00:08:01.942 13822.138 - 13881.716: 97.3852% ( 19) 00:08:01.942 13881.716 - 13941.295: 97.5446% ( 20) 00:08:01.942 13941.295 - 14000.873: 97.7519% ( 26) 00:08:01.942 14000.873 - 14060.451: 97.8795% ( 16) 00:08:01.942 14060.451 - 14120.029: 97.9990% ( 15) 00:08:01.942 14120.029 - 14179.607: 98.1107% ( 14) 00:08:01.942 14179.607 - 14239.185: 98.2223% ( 14) 00:08:01.942 14239.185 - 14298.764: 98.3578% ( 17) 00:08:01.942 14298.764 - 14358.342: 98.4455% ( 11) 00:08:01.942 14358.342 - 14417.920: 98.5252% ( 10) 00:08:01.942 14417.920 - 14477.498: 98.5969% ( 9) 00:08:01.942 14477.498 - 14537.076: 98.6607% ( 8) 00:08:01.942 14537.076 - 14596.655: 98.7085% ( 6) 00:08:01.942 14596.655 - 14656.233: 98.7484% ( 5) 00:08:01.942 14656.233 - 14715.811: 98.7962% ( 6) 00:08:01.942 14715.811 - 14775.389: 98.8361% ( 5) 00:08:01.942 14775.389 - 14834.967: 98.8760% ( 5) 00:08:01.942 14834.967 - 14894.545: 98.9078% ( 4) 00:08:01.942 14894.545 - 14954.124: 98.9318% ( 3) 00:08:01.942 14954.124 - 15013.702: 98.9477% ( 2) 00:08:01.942 15013.702 - 15073.280: 98.9716% ( 3) 00:08:01.942 15073.280 - 15132.858: 98.9796% ( 1) 00:08:01.942 21686.458 - 21805.615: 99.0195% ( 5) 00:08:01.942 21805.615 - 21924.771: 99.0753% ( 7) 00:08:01.942 21924.771 - 22043.927: 99.1151% ( 5) 00:08:01.942 22043.927 - 22163.084: 99.1629% ( 6) 00:08:01.942 22163.084 - 22282.240: 99.1948% ( 4) 00:08:01.942 22282.240 - 22401.396: 99.2347% ( 5) 00:08:01.942 22401.396 - 22520.553: 99.2746% ( 5) 00:08:01.942 22520.553 - 22639.709: 99.3144% ( 5) 00:08:01.942 22639.709 - 22758.865: 99.3622% ( 6) 00:08:01.942 22758.865 - 22878.022: 99.4021% ( 5) 00:08:01.942 22878.022 - 22997.178: 99.4499% ( 6) 00:08:01.942 22997.178 - 23116.335: 99.4818% ( 4) 00:08:01.942 23116.335 - 23235.491: 99.4898% ( 1) 00:08:01.942 28359.215 - 28478.371: 99.5217% ( 4) 00:08:01.942 28478.371 - 28597.527: 99.5695% ( 6) 00:08:01.942 28597.527 - 28716.684: 99.6173% ( 6) 00:08:01.942 28716.684 - 28835.840: 99.6572% ( 5) 00:08:01.942 28835.840 - 28954.996: 99.6971% ( 5) 00:08:01.942 28954.996 - 29074.153: 99.7290% ( 4) 00:08:01.942 29074.153 - 29193.309: 99.7688% ( 5) 00:08:01.942 29193.309 - 29312.465: 99.8087% ( 5) 00:08:01.942 29312.465 - 29431.622: 99.8565% ( 6) 00:08:01.942 29431.622 - 29550.778: 99.8884% ( 4) 00:08:01.942 29550.778 - 29669.935: 99.9283% ( 5) 00:08:01.942 29669.935 - 29789.091: 99.9681% ( 5) 00:08:01.942 29789.091 - 29908.247: 100.0000% ( 4) 00:08:01.942 00:08:01.942 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:01.942 ============================================================================== 00:08:01.942 Range in us Cumulative IO count 00:08:01.942 4378.996 - 4408.785: 0.0080% ( 1) 00:08:01.942 4408.785 - 4438.575: 0.1355% ( 16) 00:08:01.942 4438.575 - 4468.364: 0.1754% ( 5) 00:08:01.942 4468.364 - 4498.153: 0.1913% ( 2) 00:08:01.942 4498.153 - 4527.942: 0.1993% ( 1) 00:08:01.942 4527.942 - 4557.731: 0.2152% ( 2) 00:08:01.942 4557.731 - 4587.520: 0.2232% ( 1) 00:08:01.942 4587.520 - 4617.309: 0.2392% ( 2) 00:08:01.942 4617.309 - 4647.098: 0.2471% ( 1) 00:08:01.942 4647.098 - 4676.887: 0.2631% ( 2) 00:08:01.942 
4676.887 - 4706.676: 0.2790% ( 2) 00:08:01.942 4706.676 - 4736.465: 0.2870% ( 1) 00:08:01.942 4736.465 - 4766.255: 0.3029% ( 2) 00:08:01.942 4766.255 - 4796.044: 0.3109% ( 1) 00:08:01.942 4796.044 - 4825.833: 0.3268% ( 2) 00:08:01.942 4825.833 - 4855.622: 0.3348% ( 1) 00:08:01.942 4855.622 - 4885.411: 0.3508% ( 2) 00:08:01.942 4885.411 - 4915.200: 0.3667% ( 2) 00:08:01.942 4915.200 - 4944.989: 0.3827% ( 2) 00:08:01.942 4944.989 - 4974.778: 0.3986% ( 2) 00:08:01.942 4974.778 - 5004.567: 0.4145% ( 2) 00:08:01.942 5004.567 - 5034.356: 0.4305% ( 2) 00:08:01.942 5034.356 - 5064.145: 0.4464% ( 2) 00:08:01.942 5064.145 - 5093.935: 0.4624% ( 2) 00:08:01.942 5093.935 - 5123.724: 0.4783% ( 2) 00:08:01.942 5123.724 - 5153.513: 0.5022% ( 3) 00:08:01.942 5153.513 - 5183.302: 0.5102% ( 1) 00:08:01.942 6821.702 - 6851.491: 0.5182% ( 1) 00:08:01.942 6940.858 - 6970.647: 0.5341% ( 2) 00:08:01.942 6970.647 - 7000.436: 0.5820% ( 6) 00:08:01.942 7000.436 - 7030.225: 0.6457% ( 8) 00:08:01.942 7030.225 - 7060.015: 0.7095% ( 8) 00:08:01.942 7060.015 - 7089.804: 0.7175% ( 1) 00:08:01.942 7089.804 - 7119.593: 0.7254% ( 1) 00:08:01.942 7119.593 - 7149.382: 0.7414% ( 2) 00:08:01.942 7149.382 - 7179.171: 0.7494% ( 1) 00:08:01.942 7179.171 - 7208.960: 0.7653% ( 2) 00:08:01.942 7208.960 - 7238.749: 0.7733% ( 1) 00:08:01.942 7238.749 - 7268.538: 0.7892% ( 2) 00:08:01.942 7268.538 - 7298.327: 0.8052% ( 2) 00:08:01.942 7298.327 - 7328.116: 0.8211% ( 2) 00:08:01.942 7328.116 - 7357.905: 0.8371% ( 2) 00:08:01.942 7357.905 - 7387.695: 0.8530% ( 2) 00:08:01.942 7387.695 - 7417.484: 0.8689% ( 2) 00:08:01.942 7417.484 - 7447.273: 0.8769% ( 1) 00:08:01.942 7447.273 - 7477.062: 0.8929% ( 2) 00:08:01.942 7477.062 - 7506.851: 0.9088% ( 2) 00:08:01.942 7506.851 - 7536.640: 0.9247% ( 2) 00:08:01.942 7536.640 - 7566.429: 0.9407% ( 2) 00:08:01.942 7566.429 - 7596.218: 0.9566% ( 2) 00:08:01.942 7596.218 - 7626.007: 0.9805% ( 3) 00:08:01.942 7626.007 - 7685.585: 1.0124% ( 4) 00:08:01.942 7685.585 - 7745.164: 1.0204% ( 1) 00:08:01.942 8340.945 - 8400.524: 1.0284% ( 1) 00:08:01.942 8460.102 - 8519.680: 1.1240% ( 12) 00:08:01.942 8519.680 - 8579.258: 1.3552% ( 29) 00:08:01.942 8579.258 - 8638.836: 1.7219% ( 46) 00:08:01.942 8638.836 - 8698.415: 2.5351% ( 102) 00:08:01.942 8698.415 - 8757.993: 3.8744% ( 168) 00:08:01.942 8757.993 - 8817.571: 5.7159% ( 231) 00:08:01.942 8817.571 - 8877.149: 7.8125% ( 263) 00:08:01.942 8877.149 - 8936.727: 10.6266% ( 353) 00:08:01.942 8936.727 - 8996.305: 13.6001% ( 373) 00:08:01.942 8996.305 - 9055.884: 16.9404% ( 419) 00:08:01.942 9055.884 - 9115.462: 20.5995% ( 459) 00:08:01.942 9115.462 - 9175.040: 24.0035% ( 427) 00:08:01.942 9175.040 - 9234.618: 27.3916% ( 425) 00:08:01.942 9234.618 - 9294.196: 30.4847% ( 388) 00:08:01.942 9294.196 - 9353.775: 33.4503% ( 372) 00:08:01.942 9353.775 - 9413.353: 36.4557% ( 377) 00:08:01.942 9413.353 - 9472.931: 39.4292% ( 373) 00:08:01.942 9472.931 - 9532.509: 42.4665% ( 381) 00:08:01.942 9532.509 - 9592.087: 45.5517% ( 387) 00:08:01.942 9592.087 - 9651.665: 48.8520% ( 414) 00:08:01.942 9651.665 - 9711.244: 51.8973% ( 382) 00:08:01.942 9711.244 - 9770.822: 54.6078% ( 340) 00:08:01.942 9770.822 - 9830.400: 56.8160% ( 277) 00:08:01.942 9830.400 - 9889.978: 58.8090% ( 250) 00:08:01.942 9889.978 - 9949.556: 60.5867% ( 223) 00:08:01.942 9949.556 - 10009.135: 62.3645% ( 223) 00:08:01.942 10009.135 - 10068.713: 64.0147% ( 207) 00:08:01.942 10068.713 - 10128.291: 65.5054% ( 187) 00:08:01.943 10128.291 - 10187.869: 66.7172% ( 152) 00:08:01.943 10187.869 - 10247.447: 68.0405% ( 166) 
00:08:01.943 10247.447 - 10307.025: 69.2921% ( 157) 00:08:01.943 10307.025 - 10366.604: 70.4879% ( 150) 00:08:01.943 10366.604 - 10426.182: 71.9069% ( 178) 00:08:01.943 10426.182 - 10485.760: 73.0708% ( 146) 00:08:01.943 10485.760 - 10545.338: 74.1390% ( 134) 00:08:01.943 10545.338 - 10604.916: 75.0957% ( 120) 00:08:01.943 10604.916 - 10664.495: 76.1878% ( 137) 00:08:01.943 10664.495 - 10724.073: 77.0328% ( 106) 00:08:01.943 10724.073 - 10783.651: 77.8619% ( 104) 00:08:01.943 10783.651 - 10843.229: 78.5236% ( 83) 00:08:01.943 10843.229 - 10902.807: 79.2092% ( 86) 00:08:01.943 10902.807 - 10962.385: 79.8469% ( 80) 00:08:01.943 10962.385 - 11021.964: 80.4289% ( 73) 00:08:01.943 11021.964 - 11081.542: 80.9790% ( 69) 00:08:01.943 11081.542 - 11141.120: 81.6327% ( 82) 00:08:01.943 11141.120 - 11200.698: 82.4777% ( 106) 00:08:01.943 11200.698 - 11260.276: 83.2350% ( 95) 00:08:01.943 11260.276 - 11319.855: 83.9844% ( 94) 00:08:01.943 11319.855 - 11379.433: 84.7337% ( 94) 00:08:01.943 11379.433 - 11439.011: 85.4034% ( 84) 00:08:01.943 11439.011 - 11498.589: 86.0810% ( 85) 00:08:01.943 11498.589 - 11558.167: 86.6390% ( 70) 00:08:01.943 11558.167 - 11617.745: 87.2768% ( 80) 00:08:01.943 11617.745 - 11677.324: 87.7551% ( 60) 00:08:01.943 11677.324 - 11736.902: 88.3530% ( 75) 00:08:01.943 11736.902 - 11796.480: 88.9110% ( 70) 00:08:01.943 11796.480 - 11856.058: 89.4292% ( 65) 00:08:01.943 11856.058 - 11915.636: 89.9713% ( 68) 00:08:01.943 11915.636 - 11975.215: 90.4815% ( 64) 00:08:01.943 11975.215 - 12034.793: 90.9359% ( 57) 00:08:01.943 12034.793 - 12094.371: 91.3983% ( 58) 00:08:01.943 12094.371 - 12153.949: 91.7969% ( 50) 00:08:01.943 12153.949 - 12213.527: 92.2513% ( 57) 00:08:01.943 12213.527 - 12273.105: 92.6499% ( 50) 00:08:01.943 12273.105 - 12332.684: 93.0804% ( 54) 00:08:01.943 12332.684 - 12392.262: 93.3913% ( 39) 00:08:01.943 12392.262 - 12451.840: 93.7101% ( 40) 00:08:01.943 12451.840 - 12511.418: 93.9892% ( 35) 00:08:01.943 12511.418 - 12570.996: 94.2124% ( 28) 00:08:01.943 12570.996 - 12630.575: 94.3957% ( 23) 00:08:01.943 12630.575 - 12690.153: 94.5552% ( 20) 00:08:01.943 12690.153 - 12749.731: 94.6827% ( 16) 00:08:01.943 12749.731 - 12809.309: 94.8103% ( 16) 00:08:01.943 12809.309 - 12868.887: 94.9298% ( 15) 00:08:01.943 12868.887 - 12928.465: 95.0654% ( 17) 00:08:01.943 12928.465 - 12988.044: 95.2487% ( 23) 00:08:01.943 12988.044 - 13047.622: 95.3763% ( 16) 00:08:01.943 13047.622 - 13107.200: 95.5118% ( 17) 00:08:01.943 13107.200 - 13166.778: 95.6314% ( 15) 00:08:01.943 13166.778 - 13226.356: 95.7350% ( 13) 00:08:01.943 13226.356 - 13285.935: 95.8147% ( 10) 00:08:01.943 13285.935 - 13345.513: 95.8705% ( 7) 00:08:01.943 13345.513 - 13405.091: 95.9343% ( 8) 00:08:01.943 13405.091 - 13464.669: 96.0938% ( 20) 00:08:01.943 13464.669 - 13524.247: 96.2851% ( 24) 00:08:01.943 13524.247 - 13583.825: 96.5163% ( 29) 00:08:01.943 13583.825 - 13643.404: 96.6518% ( 17) 00:08:01.943 13643.404 - 13702.982: 96.8192% ( 21) 00:08:01.943 13702.982 - 13762.560: 96.9547% ( 17) 00:08:01.943 13762.560 - 13822.138: 97.1221% ( 21) 00:08:01.943 13822.138 - 13881.716: 97.3214% ( 25) 00:08:01.943 13881.716 - 13941.295: 97.4888% ( 21) 00:08:01.943 13941.295 - 14000.873: 97.6722% ( 23) 00:08:01.943 14000.873 - 14060.451: 97.8316% ( 20) 00:08:01.943 14060.451 - 14120.029: 97.9990% ( 21) 00:08:01.943 14120.029 - 14179.607: 98.1107% ( 14) 00:08:01.943 14179.607 - 14239.185: 98.2063% ( 12) 00:08:01.943 14239.185 - 14298.764: 98.3020% ( 12) 00:08:01.943 14298.764 - 14358.342: 98.4056% ( 13) 00:08:01.943 14358.342 - 
14417.920: 98.4774% ( 9) 00:08:01.943 14417.920 - 14477.498: 98.5491% ( 9) 00:08:01.943 14477.498 - 14537.076: 98.6049% ( 7) 00:08:01.943 14537.076 - 14596.655: 98.6607% ( 7) 00:08:01.943 14596.655 - 14656.233: 98.7165% ( 7) 00:08:01.943 14656.233 - 14715.811: 98.7643% ( 6) 00:08:01.943 14715.811 - 14775.389: 98.7962% ( 4) 00:08:01.943 14775.389 - 14834.967: 98.8361% ( 5) 00:08:01.943 14834.967 - 14894.545: 98.8760% ( 5) 00:08:01.943 14894.545 - 14954.124: 98.9078% ( 4) 00:08:01.943 14954.124 - 15013.702: 98.9397% ( 4) 00:08:01.943 15013.702 - 15073.280: 98.9716% ( 4) 00:08:01.943 15073.280 - 15132.858: 98.9796% ( 1) 00:08:01.943 21328.989 - 21448.145: 99.0274% ( 6) 00:08:01.943 21448.145 - 21567.302: 99.0753% ( 6) 00:08:01.943 21567.302 - 21686.458: 99.1151% ( 5) 00:08:01.943 21686.458 - 21805.615: 99.1709% ( 7) 00:08:01.943 21805.615 - 21924.771: 99.2427% ( 9) 00:08:01.943 21924.771 - 22043.927: 99.2746% ( 4) 00:08:01.943 22043.927 - 22163.084: 99.2985% ( 3) 00:08:01.943 22163.084 - 22282.240: 99.3304% ( 4) 00:08:01.943 22282.240 - 22401.396: 99.3702% ( 5) 00:08:01.943 22401.396 - 22520.553: 99.4101% ( 5) 00:08:01.943 22520.553 - 22639.709: 99.4579% ( 6) 00:08:01.943 22639.709 - 22758.865: 99.4898% ( 4) 00:08:01.943 27882.589 - 28001.745: 99.5057% ( 2) 00:08:01.943 28001.745 - 28120.902: 99.5456% ( 5) 00:08:01.943 28120.902 - 28240.058: 99.5934% ( 6) 00:08:01.943 28240.058 - 28359.215: 99.6413% ( 6) 00:08:01.943 28359.215 - 28478.371: 99.6811% ( 5) 00:08:01.943 28478.371 - 28597.527: 99.7210% ( 5) 00:08:01.943 28597.527 - 28716.684: 99.7688% ( 6) 00:08:01.943 28716.684 - 28835.840: 99.8007% ( 4) 00:08:01.943 28835.840 - 28954.996: 99.8406% ( 5) 00:08:01.943 28954.996 - 29074.153: 99.8804% ( 5) 00:08:01.943 29074.153 - 29193.309: 99.9283% ( 6) 00:08:01.943 29193.309 - 29312.465: 99.9761% ( 6) 00:08:01.943 29312.465 - 29431.622: 100.0000% ( 3) 00:08:01.943 00:08:01.943 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:01.943 ============================================================================== 00:08:01.943 Range in us Cumulative IO count 00:08:01.943 4081.105 - 4110.895: 0.0319% ( 4) 00:08:01.943 4110.895 - 4140.684: 0.0478% ( 2) 00:08:01.943 4140.684 - 4170.473: 0.1515% ( 13) 00:08:01.943 4170.473 - 4200.262: 0.1754% ( 3) 00:08:01.943 4200.262 - 4230.051: 0.1913% ( 2) 00:08:01.943 4230.051 - 4259.840: 0.1993% ( 1) 00:08:01.943 4259.840 - 4289.629: 0.2152% ( 2) 00:08:01.943 4289.629 - 4319.418: 0.2232% ( 1) 00:08:01.943 4319.418 - 4349.207: 0.2392% ( 2) 00:08:01.943 4349.207 - 4378.996: 0.2551% ( 2) 00:08:01.943 4378.996 - 4408.785: 0.2710% ( 2) 00:08:01.943 4408.785 - 4438.575: 0.2870% ( 2) 00:08:01.943 4438.575 - 4468.364: 0.3029% ( 2) 00:08:01.943 4468.364 - 4498.153: 0.3189% ( 2) 00:08:01.943 4498.153 - 4527.942: 0.3268% ( 1) 00:08:01.943 4527.942 - 4557.731: 0.3428% ( 2) 00:08:01.943 4557.731 - 4587.520: 0.3587% ( 2) 00:08:01.943 4587.520 - 4617.309: 0.3747% ( 2) 00:08:01.943 4617.309 - 4647.098: 0.3906% ( 2) 00:08:01.943 4647.098 - 4676.887: 0.4145% ( 3) 00:08:01.943 4676.887 - 4706.676: 0.4305% ( 2) 00:08:01.943 4706.676 - 4736.465: 0.4464% ( 2) 00:08:01.943 4736.465 - 4766.255: 0.4624% ( 2) 00:08:01.943 4766.255 - 4796.044: 0.4783% ( 2) 00:08:01.943 4796.044 - 4825.833: 0.4943% ( 2) 00:08:01.943 4825.833 - 4855.622: 0.5022% ( 1) 00:08:01.943 4855.622 - 4885.411: 0.5102% ( 1) 00:08:01.943 6553.600 - 6583.389: 0.5182% ( 1) 00:08:01.943 6672.756 - 6702.545: 0.5501% ( 4) 00:08:01.943 6702.545 - 6732.335: 0.5899% ( 5) 00:08:01.943 6732.335 - 6762.124: 0.6218% 
( 4) 00:08:01.943 6762.124 - 6791.913: 0.7095% ( 11) 00:08:01.943 6791.913 - 6821.702: 0.7175% ( 1) 00:08:01.943 6821.702 - 6851.491: 0.7254% ( 1) 00:08:01.943 6851.491 - 6881.280: 0.7334% ( 1) 00:08:01.943 6881.280 - 6911.069: 0.7494% ( 2) 00:08:01.943 6911.069 - 6940.858: 0.7653% ( 2) 00:08:01.943 6940.858 - 6970.647: 0.7812% ( 2) 00:08:01.943 6970.647 - 7000.436: 0.7972% ( 2) 00:08:01.943 7000.436 - 7030.225: 0.8131% ( 2) 00:08:01.943 7030.225 - 7060.015: 0.8291% ( 2) 00:08:01.943 7060.015 - 7089.804: 0.8371% ( 1) 00:08:01.943 7089.804 - 7119.593: 0.8530% ( 2) 00:08:01.943 7119.593 - 7149.382: 0.8689% ( 2) 00:08:01.943 7149.382 - 7179.171: 0.8849% ( 2) 00:08:01.943 7179.171 - 7208.960: 0.8929% ( 1) 00:08:01.943 7208.960 - 7238.749: 0.9088% ( 2) 00:08:01.943 7238.749 - 7268.538: 0.9247% ( 2) 00:08:01.943 7268.538 - 7298.327: 0.9487% ( 3) 00:08:01.943 7298.327 - 7328.116: 0.9646% ( 2) 00:08:01.943 7328.116 - 7357.905: 0.9805% ( 2) 00:08:01.943 7357.905 - 7387.695: 0.9965% ( 2) 00:08:01.943 7387.695 - 7417.484: 1.0124% ( 2) 00:08:01.943 7417.484 - 7447.273: 1.0204% ( 1) 00:08:01.943 8281.367 - 8340.945: 1.0284% ( 1) 00:08:01.943 8400.524 - 8460.102: 1.0603% ( 4) 00:08:01.943 8460.102 - 8519.680: 1.1719% ( 14) 00:08:01.943 8519.680 - 8579.258: 1.3871% ( 27) 00:08:01.943 8579.258 - 8638.836: 1.8893% ( 63) 00:08:01.943 8638.836 - 8698.415: 2.7105% ( 103) 00:08:01.943 8698.415 - 8757.993: 3.8744% ( 146) 00:08:01.943 8757.993 - 8817.571: 5.4847% ( 202) 00:08:01.943 8817.571 - 8877.149: 7.5973% ( 265) 00:08:01.943 8877.149 - 8936.727: 10.2360% ( 331) 00:08:01.943 8936.727 - 8996.305: 12.9863% ( 345) 00:08:01.943 8996.305 - 9055.884: 16.4222% ( 431) 00:08:01.943 9055.884 - 9115.462: 19.8103% ( 425) 00:08:01.943 9115.462 - 9175.040: 23.5491% ( 469) 00:08:01.943 9175.040 - 9234.618: 27.0568% ( 440) 00:08:01.943 9234.618 - 9294.196: 30.4209% ( 422) 00:08:01.943 9294.196 - 9353.775: 33.6416% ( 404) 00:08:01.943 9353.775 - 9413.353: 37.0297% ( 425) 00:08:01.943 9413.353 - 9472.931: 40.4656% ( 431) 00:08:01.943 9472.931 - 9532.509: 43.4471% ( 374) 00:08:01.943 9532.509 - 9592.087: 46.5163% ( 385) 00:08:01.943 9592.087 - 9651.665: 49.5217% ( 377) 00:08:01.943 9651.665 - 9711.244: 52.1046% ( 324) 00:08:01.943 9711.244 - 9770.822: 54.6716% ( 322) 00:08:01.943 9770.822 - 9830.400: 56.8559% ( 274) 00:08:01.943 9830.400 - 9889.978: 58.6655% ( 227) 00:08:01.943 9889.978 - 9949.556: 60.4193% ( 220) 00:08:01.944 9949.556 - 10009.135: 62.0057% ( 199) 00:08:01.944 10009.135 - 10068.713: 63.5364% ( 192) 00:08:01.944 10068.713 - 10128.291: 64.9793% ( 181) 00:08:01.944 10128.291 - 10187.869: 66.4302% ( 182) 00:08:01.944 10187.869 - 10247.447: 67.7136% ( 161) 00:08:01.944 10247.447 - 10307.025: 69.0290% ( 165) 00:08:01.944 10307.025 - 10366.604: 70.1291% ( 138) 00:08:01.944 10366.604 - 10426.182: 71.4206% ( 162) 00:08:01.944 10426.182 - 10485.760: 72.7360% ( 165) 00:08:01.944 10485.760 - 10545.338: 73.9955% ( 158) 00:08:01.944 10545.338 - 10604.916: 75.1276% ( 142) 00:08:01.944 10604.916 - 10664.495: 76.1400% ( 127) 00:08:01.944 10664.495 - 10724.073: 77.1126% ( 122) 00:08:01.944 10724.073 - 10783.651: 77.9974% ( 111) 00:08:01.944 10783.651 - 10843.229: 78.6910% ( 87) 00:08:01.944 10843.229 - 10902.807: 79.4483% ( 95) 00:08:01.944 10902.807 - 10962.385: 80.1578% ( 89) 00:08:01.944 10962.385 - 11021.964: 80.7398% ( 73) 00:08:01.944 11021.964 - 11081.542: 81.2978% ( 70) 00:08:01.944 11081.542 - 11141.120: 81.9117% ( 77) 00:08:01.944 11141.120 - 11200.698: 82.5494% ( 80) 00:08:01.944 11200.698 - 11260.276: 83.3466% ( 
100) 00:08:01.944 11260.276 - 11319.855: 84.1518% ( 101) 00:08:01.944 11319.855 - 11379.433: 84.8453% ( 87) 00:08:01.944 11379.433 - 11439.011: 85.4990% ( 82) 00:08:01.944 11439.011 - 11498.589: 86.2883% ( 99) 00:08:01.944 11498.589 - 11558.167: 87.1014% ( 102) 00:08:01.944 11558.167 - 11617.745: 87.5797% ( 60) 00:08:01.944 11617.745 - 11677.324: 88.0261% ( 56) 00:08:01.944 11677.324 - 11736.902: 88.6240% ( 75) 00:08:01.944 11736.902 - 11796.480: 89.1502% ( 66) 00:08:01.944 11796.480 - 11856.058: 89.6046% ( 57) 00:08:01.944 11856.058 - 11915.636: 90.1387% ( 67) 00:08:01.944 11915.636 - 11975.215: 90.7047% ( 71) 00:08:01.944 11975.215 - 12034.793: 91.1751% ( 59) 00:08:01.944 12034.793 - 12094.371: 91.5577% ( 48) 00:08:01.944 12094.371 - 12153.949: 91.9962% ( 55) 00:08:01.944 12153.949 - 12213.527: 92.3948% ( 50) 00:08:01.944 12213.527 - 12273.105: 92.7216% ( 41) 00:08:01.944 12273.105 - 12332.684: 93.0564% ( 42) 00:08:01.944 12332.684 - 12392.262: 93.5188% ( 58) 00:08:01.944 12392.262 - 12451.840: 93.7899% ( 34) 00:08:01.944 12451.840 - 12511.418: 94.0370% ( 31) 00:08:01.944 12511.418 - 12570.996: 94.2363% ( 25) 00:08:01.944 12570.996 - 12630.575: 94.4276% ( 24) 00:08:01.944 12630.575 - 12690.153: 94.5950% ( 21) 00:08:01.944 12690.153 - 12749.731: 94.7146% ( 15) 00:08:01.944 12749.731 - 12809.309: 94.8661% ( 19) 00:08:01.944 12809.309 - 12868.887: 95.0016% ( 17) 00:08:01.944 12868.887 - 12928.465: 95.1291% ( 16) 00:08:01.944 12928.465 - 12988.044: 95.2726% ( 18) 00:08:01.944 12988.044 - 13047.622: 95.3683% ( 12) 00:08:01.944 13047.622 - 13107.200: 95.5198% ( 19) 00:08:01.944 13107.200 - 13166.778: 95.6393% ( 15) 00:08:01.944 13166.778 - 13226.356: 95.7510% ( 14) 00:08:01.944 13226.356 - 13285.935: 95.8705% ( 15) 00:08:01.944 13285.935 - 13345.513: 95.9582% ( 11) 00:08:01.944 13345.513 - 13405.091: 96.0379% ( 10) 00:08:01.944 13405.091 - 13464.669: 96.2293% ( 24) 00:08:01.944 13464.669 - 13524.247: 96.3887% ( 20) 00:08:01.944 13524.247 - 13583.825: 96.5322% ( 18) 00:08:01.944 13583.825 - 13643.404: 96.6677% ( 17) 00:08:01.944 13643.404 - 13702.982: 96.8033% ( 17) 00:08:01.944 13702.982 - 13762.560: 96.9707% ( 21) 00:08:01.944 13762.560 - 13822.138: 97.1142% ( 18) 00:08:01.944 13822.138 - 13881.716: 97.2656% ( 19) 00:08:01.944 13881.716 - 13941.295: 97.4091% ( 18) 00:08:01.944 13941.295 - 14000.873: 97.5606% ( 19) 00:08:01.944 14000.873 - 14060.451: 97.6961% ( 17) 00:08:01.944 14060.451 - 14120.029: 97.8476% ( 19) 00:08:01.944 14120.029 - 14179.607: 97.9911% ( 18) 00:08:01.944 14179.607 - 14239.185: 98.1186% ( 16) 00:08:01.944 14239.185 - 14298.764: 98.2302% ( 14) 00:08:01.944 14298.764 - 14358.342: 98.3498% ( 15) 00:08:01.944 14358.342 - 14417.920: 98.4375% ( 11) 00:08:01.944 14417.920 - 14477.498: 98.5092% ( 9) 00:08:01.944 14477.498 - 14537.076: 98.5730% ( 8) 00:08:01.944 14537.076 - 14596.655: 98.6209% ( 6) 00:08:01.944 14596.655 - 14656.233: 98.6846% ( 8) 00:08:01.944 14656.233 - 14715.811: 98.7404% ( 7) 00:08:01.944 14715.811 - 14775.389: 98.7723% ( 4) 00:08:01.944 14775.389 - 14834.967: 98.8042% ( 4) 00:08:01.944 14834.967 - 14894.545: 98.8441% ( 5) 00:08:01.944 14894.545 - 14954.124: 98.8760% ( 4) 00:08:01.944 14954.124 - 15013.702: 98.9078% ( 4) 00:08:01.944 15013.702 - 15073.280: 98.9318% ( 3) 00:08:01.944 15073.280 - 15132.858: 98.9557% ( 3) 00:08:01.944 15132.858 - 15192.436: 98.9716% ( 2) 00:08:01.944 15192.436 - 15252.015: 98.9796% ( 1) 00:08:01.944 20971.520 - 21090.676: 99.0115% ( 4) 00:08:01.944 21090.676 - 21209.833: 99.0673% ( 7) 00:08:01.944 21209.833 - 21328.989: 99.1311% 
( 8) 00:08:01.944 21328.989 - 21448.145: 99.1948% ( 8) 00:08:01.944 21448.145 - 21567.302: 99.2506% ( 7) 00:08:01.944 21567.302 - 21686.458: 99.3144% ( 8) 00:08:01.944 21686.458 - 21805.615: 99.3543% ( 5) 00:08:01.944 21805.615 - 21924.771: 99.3862% ( 4) 00:08:01.944 21924.771 - 22043.927: 99.4260% ( 5) 00:08:01.944 22043.927 - 22163.084: 99.4579% ( 4) 00:08:01.944 22163.084 - 22282.240: 99.4898% ( 4) 00:08:01.944 26929.338 - 27048.495: 99.5297% ( 5) 00:08:01.944 27048.495 - 27167.651: 99.5376% ( 1) 00:08:01.944 27525.120 - 27644.276: 99.5775% ( 5) 00:08:01.944 27644.276 - 27763.433: 99.6253% ( 6) 00:08:01.944 27763.433 - 27882.589: 99.6652% ( 5) 00:08:01.944 27882.589 - 28001.745: 99.7130% ( 6) 00:08:01.944 28001.745 - 28120.902: 99.7608% ( 6) 00:08:01.944 28120.902 - 28240.058: 99.8007% ( 5) 00:08:01.944 28240.058 - 28359.215: 99.8565% ( 7) 00:08:01.944 28359.215 - 28478.371: 99.8964% ( 5) 00:08:01.944 28478.371 - 28597.527: 99.9283% ( 4) 00:08:01.944 28597.527 - 28716.684: 99.9681% ( 5) 00:08:01.944 28716.684 - 28835.840: 100.0000% ( 4) 00:08:01.944 00:08:01.944 12:49:53 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:01.944 00:08:01.944 real 0m2.553s 00:08:01.944 user 0m2.223s 00:08:01.944 sys 0m0.216s 00:08:01.944 12:49:53 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.944 12:49:53 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.944 ************************************ 00:08:01.944 END TEST nvme_perf 00:08:01.944 ************************************ 00:08:01.944 12:49:53 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:01.944 12:49:53 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:01.944 12:49:53 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.944 12:49:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.944 ************************************ 00:08:01.944 START TEST nvme_hello_world 00:08:01.944 ************************************ 00:08:01.944 12:49:53 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:01.944 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:01.944 Initializing NVMe Controllers 00:08:01.944 Attached to 0000:00:10.0 00:08:01.944 Namespace ID: 1 size: 6GB 00:08:01.944 Attached to 0000:00:11.0 00:08:01.944 Namespace ID: 1 size: 5GB 00:08:01.944 Attached to 0000:00:13.0 00:08:01.944 Namespace ID: 1 size: 1GB 00:08:01.944 Attached to 0000:00:12.0 00:08:01.944 Namespace ID: 1 size: 4GB 00:08:01.944 Namespace ID: 2 size: 4GB 00:08:01.944 Namespace ID: 3 size: 4GB 00:08:01.944 Initialization complete. 00:08:01.944 INFO: using host memory buffer for IO 00:08:01.944 Hello world! 00:08:01.944 INFO: using host memory buffer for IO 00:08:01.944 Hello world! 00:08:01.944 INFO: using host memory buffer for IO 00:08:01.944 Hello world! 00:08:01.944 INFO: using host memory buffer for IO 00:08:01.944 Hello world! 00:08:01.944 INFO: using host memory buffer for IO 00:08:01.944 Hello world! 00:08:01.944 INFO: using host memory buffer for IO 00:08:01.944 Hello world! 
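The hello_world run above attaches to each controller, allocates a DMA-safe host buffer (the "using host memory buffer for IO" lines indicate it fell back to ordinary host memory rather than a controller memory buffer), writes the string to LBA 0 of every namespace, reads it back, and prints "Hello world!" when the data matches. The following is only a hedged sketch of that round trip against a single namespace, not the example's actual source: the helper names (hello_roundtrip, io_complete) are illustrative, error handling is omitted, and the namespace and queue pair are assumed to have been set up already via spdk_nvme_probe() and spdk_nvme_ctrlr_alloc_io_qpair().

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool g_io_done;

    static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_io_done = true;
    }

    /* Write "Hello world!" to LBA 0 of one namespace and read it back. */
    static void hello_roundtrip(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
    {
        uint32_t sector = spdk_nvme_ns_get_sector_size(ns);
        char *buf = spdk_zmalloc(sector, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        snprintf(buf, sector, "%s", "Hello world!");
        g_io_done = false;
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* block */, io_complete, NULL, 0);
        while (!g_io_done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }

        memset(buf, 0, sector);
        g_io_done = false;
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
        while (!g_io_done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }

        printf("%s\n", buf);   /* prints "Hello world!", matching the log above */
        spdk_free(buf);
    }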
00:08:01.944 00:08:01.944 real 0m0.258s 00:08:01.944 user 0m0.093s 00:08:01.944 sys 0m0.113s 00:08:01.944 12:49:53 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.944 12:49:53 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:01.944 ************************************ 00:08:01.944 END TEST nvme_hello_world 00:08:01.944 ************************************ 00:08:01.944 12:49:53 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:01.944 12:49:53 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:01.944 12:49:53 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.944 12:49:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.944 ************************************ 00:08:01.944 START TEST nvme_sgl 00:08:01.944 ************************************ 00:08:01.944 12:49:53 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:02.201 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:02.201 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:02.201 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:02.201 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:02.201 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:02.201 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:02.201 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:02.201 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:02.201 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:02.201 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:02.459 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:02.459 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:02.459 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:02.459 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:08:02.459 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:02.459 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:02.459 NVMe Readv/Writev Request test 00:08:02.459 Attached to 0000:00:10.0 00:08:02.459 Attached to 0000:00:11.0 00:08:02.459 Attached to 0000:00:13.0 00:08:02.459 Attached to 0000:00:12.0 00:08:02.459 0000:00:10.0: build_io_request_2 test passed 00:08:02.459 0000:00:10.0: build_io_request_4 test passed 00:08:02.459 0000:00:10.0: build_io_request_5 test passed 00:08:02.459 0000:00:10.0: build_io_request_6 test passed 00:08:02.459 0000:00:10.0: build_io_request_7 test passed 00:08:02.459 0000:00:10.0: build_io_request_10 test passed 00:08:02.459 0000:00:11.0: build_io_request_2 test passed 00:08:02.459 0000:00:11.0: build_io_request_4 test passed 00:08:02.459 0000:00:11.0: build_io_request_5 test passed 00:08:02.459 0000:00:11.0: build_io_request_6 test passed 00:08:02.459 0000:00:11.0: build_io_request_7 test passed 00:08:02.459 0000:00:11.0: build_io_request_10 test passed 00:08:02.459 Cleaning up... 00:08:02.459 00:08:02.459 real 0m0.339s 00:08:02.459 user 0m0.167s 00:08:02.459 sys 0m0.113s 00:08:02.459 12:49:53 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.459 12:49:53 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 ************************************ 00:08:02.459 END TEST nvme_sgl 00:08:02.459 ************************************ 00:08:02.459 12:49:53 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:02.459 12:49:53 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:02.459 12:49:53 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.459 12:49:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 ************************************ 00:08:02.459 START TEST nvme_e2edp 00:08:02.459 ************************************ 00:08:02.459 12:49:53 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:02.459 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:02.717 NVMe Write/Read with End-to-End data protection test 00:08:02.717 Attached to 0000:00:10.0 00:08:02.717 Attached to 0000:00:11.0 00:08:02.717 Attached to 0000:00:13.0 00:08:02.717 Attached to 0000:00:12.0 00:08:02.717 Cleaning up... 
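The build_io_request_* lines above come from the nvme_sgl test, which submits vectored reads and writes: the request payload is described by caller-supplied callbacks that reset and walk a scatter list, and cases built with lengths the driver rejects are reported as "Invalid IO length parameter" while the valid ones pass. Below is a rough sketch of that submission path under stated assumptions; the sgl_ctx structure and helper names are invented for illustration, and only the spdk_nvme_ns_cmd_readv() call and its callback signatures follow the real API.

    #include <sys/uio.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct iovec iov[4];   /* scatter list describing the payload */
        int          iovpos;
        size_t       iov_offset;
    };

    /* Called by the driver before (re)walking the scatter list. */
    static void reset_sgl(void *cb_arg, uint32_t offset)
    {
        struct sgl_ctx *ctx = cb_arg;

        ctx->iovpos = 0;
        ctx->iov_offset = offset;   /* a real walker would also skip fully-covered iovecs */
    }

    /* Called by the driver to fetch the next scatter-gather element. */
    static int next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = cb_arg;
        struct iovec *iov = &ctx->iov[ctx->iovpos++];

        *address = (char *)iov->iov_base + ctx->iov_offset;
        *length  = iov->iov_len - ctx->iov_offset;
        ctx->iov_offset = 0;
        return 0;
    }

    static int submit_vectored_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                                    struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
                                    spdk_nvme_cmd_cb cb_fn)
    {
        /* If the iovecs do not add up to a valid payload for lba_count blocks,
         * submission fails, which the test reports as "Invalid IO length parameter". */
        return spdk_nvme_ns_cmd_readv(ns, qpair, lba, lba_count, cb_fn, ctx, 0,
                                      reset_sgl, next_sge);
    }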
00:08:02.717 00:08:02.717 real 0m0.246s 00:08:02.717 user 0m0.092s 00:08:02.717 sys 0m0.111s 00:08:02.717 ************************************ 00:08:02.717 END TEST nvme_e2edp 00:08:02.717 ************************************ 00:08:02.717 12:49:54 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.717 12:49:54 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:02.717 12:49:54 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:02.717 12:49:54 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:02.717 12:49:54 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.717 12:49:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.717 ************************************ 00:08:02.717 START TEST nvme_reserve 00:08:02.717 ************************************ 00:08:02.717 12:49:54 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:02.717 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:02.975 ===================================================== 00:08:02.975 NVMe Controller at PCI bus 0, device 16, function 0 00:08:02.975 ===================================================== 00:08:02.975 Reservations: Not Supported 00:08:02.975 ===================================================== 00:08:02.975 NVMe Controller at PCI bus 0, device 17, function 0 00:08:02.975 ===================================================== 00:08:02.975 Reservations: Not Supported 00:08:02.975 ===================================================== 00:08:02.975 NVMe Controller at PCI bus 0, device 19, function 0 00:08:02.975 ===================================================== 00:08:02.975 Reservations: Not Supported 00:08:02.975 ===================================================== 00:08:02.975 NVMe Controller at PCI bus 0, device 18, function 0 00:08:02.975 ===================================================== 00:08:02.975 Reservations: Not Supported 00:08:02.975 Reservation test passed 00:08:02.975 ************************************ 00:08:02.975 END TEST nvme_reserve 00:08:02.975 ************************************ 00:08:02.975 00:08:02.975 real 0m0.246s 00:08:02.975 user 0m0.084s 00:08:02.975 sys 0m0.118s 00:08:02.975 12:49:54 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.975 12:49:54 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:02.975 12:49:54 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:02.975 12:49:54 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:02.975 12:49:54 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.975 12:49:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.975 ************************************ 00:08:02.975 START TEST nvme_err_injection 00:08:02.975 ************************************ 00:08:02.975 12:49:54 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:02.975 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:03.234 NVMe Error Injection test 00:08:03.234 Attached to 0000:00:10.0 00:08:03.234 Attached to 0000:00:11.0 00:08:03.234 Attached to 0000:00:13.0 00:08:03.234 Attached to 0000:00:12.0 00:08:03.234 0000:00:12.0: get features failed as expected 00:08:03.234 0000:00:10.0: get 
features failed as expected 00:08:03.234 0000:00:11.0: get features failed as expected 00:08:03.234 0000:00:13.0: get features failed as expected 00:08:03.234 0000:00:11.0: get features successfully as expected 00:08:03.234 0000:00:13.0: get features successfully as expected 00:08:03.234 0000:00:12.0: get features successfully as expected 00:08:03.234 0000:00:10.0: get features successfully as expected 00:08:03.234 0000:00:10.0: read failed as expected 00:08:03.234 0000:00:11.0: read failed as expected 00:08:03.234 0000:00:13.0: read failed as expected 00:08:03.234 0000:00:12.0: read failed as expected 00:08:03.234 0000:00:10.0: read successfully as expected 00:08:03.234 0000:00:11.0: read successfully as expected 00:08:03.234 0000:00:13.0: read successfully as expected 00:08:03.234 0000:00:12.0: read successfully as expected 00:08:03.234 Cleaning up... 00:08:03.234 00:08:03.234 real 0m0.252s 00:08:03.234 user 0m0.105s 00:08:03.234 sys 0m0.105s 00:08:03.234 ************************************ 00:08:03.234 END TEST nvme_err_injection 00:08:03.234 ************************************ 00:08:03.234 12:49:54 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.234 12:49:54 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:03.234 12:49:54 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:03.234 12:49:54 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:03.234 12:49:54 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.234 12:49:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.234 ************************************ 00:08:03.234 START TEST nvme_overhead 00:08:03.234 ************************************ 00:08:03.234 12:49:54 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:03.492 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:04.428 Initializing NVMe Controllers 00:08:04.428 Attached to 0000:00:10.0 00:08:04.428 Attached to 0000:00:11.0 00:08:04.428 Attached to 0000:00:13.0 00:08:04.428 Attached to 0000:00:12.0 00:08:04.428 Initialization complete. Launching workers. 
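The nvme_overhead tool whose output follows times each 4 KiB I/O (-o 4096) over a one-second run (-t 1) with latency histograms enabled (-H) and reports how long the submit and completion paths take in nanoseconds, both as avg/min/max and as the histograms printed below. The snippet is only a sketch of the measurement idea using SPDK's TSC helpers; measure_submit_ns is a hypothetical name and the real tool's bookkeeping differs.

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Time a single read submission and convert CPU ticks to nanoseconds.
     * Aggregating many such samples yields numbers like the
     * "submit (in ns) avg, min, max" line below. */
    static uint64_t measure_submit_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                                      void *buf, uint64_t lba, spdk_nvme_cmd_cb cb_fn)
    {
        uint64_t tsc_hz = spdk_get_ticks_hz();   /* ticks per second */
        uint64_t start  = spdk_get_ticks();

        spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, cb_fn, NULL, 0);

        uint64_t end = spdk_get_ticks();
        return (end - start) * 1000000000ULL / tsc_hz;
    }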
00:08:04.428 submit (in ns) avg, min, max = 17654.2, 13161.4, 135721.8 00:08:04.428 complete (in ns) avg, min, max = 12298.5, 8857.3, 70798.6 00:08:04.428 00:08:04.428 Submit histogram 00:08:04.428 ================ 00:08:04.428 Range in us Cumulative Count 00:08:04.428 13.149 - 13.207: 0.0120% ( 1) 00:08:04.428 13.847 - 13.905: 0.0360% ( 2) 00:08:04.428 13.905 - 13.964: 0.1081% ( 6) 00:08:04.428 13.964 - 14.022: 0.2162% ( 9) 00:08:04.428 14.022 - 14.080: 0.5766% ( 30) 00:08:04.428 14.080 - 14.138: 1.1411% ( 47) 00:08:04.428 14.138 - 14.196: 2.0180% ( 73) 00:08:04.428 14.196 - 14.255: 3.3754% ( 113) 00:08:04.428 14.255 - 14.313: 5.2372% ( 155) 00:08:04.428 14.313 - 14.371: 7.1712% ( 161) 00:08:04.428 14.371 - 14.429: 9.3333% ( 180) 00:08:04.428 14.429 - 14.487: 11.7117% ( 198) 00:08:04.428 14.487 - 14.545: 14.8709% ( 263) 00:08:04.428 14.545 - 14.604: 18.8709% ( 333) 00:08:04.428 14.604 - 14.662: 23.0511% ( 348) 00:08:04.428 14.662 - 14.720: 27.7477% ( 391) 00:08:04.428 14.720 - 14.778: 32.1081% ( 363) 00:08:04.428 14.778 - 14.836: 35.7237% ( 301) 00:08:04.428 14.836 - 14.895: 38.8108% ( 257) 00:08:04.428 14.895 - 15.011: 43.1832% ( 364) 00:08:04.428 15.011 - 15.127: 45.8018% ( 218) 00:08:04.428 15.127 - 15.244: 48.0601% ( 188) 00:08:04.428 15.244 - 15.360: 50.3423% ( 190) 00:08:04.428 15.360 - 15.476: 53.4535% ( 259) 00:08:04.428 15.476 - 15.593: 56.0601% ( 217) 00:08:04.428 15.593 - 15.709: 58.1021% ( 170) 00:08:04.428 15.709 - 15.825: 59.4835% ( 115) 00:08:04.428 15.825 - 15.942: 60.3844% ( 75) 00:08:04.428 15.942 - 16.058: 61.0571% ( 56) 00:08:04.428 16.058 - 16.175: 61.6697% ( 51) 00:08:04.428 16.175 - 16.291: 62.1381% ( 39) 00:08:04.428 16.291 - 16.407: 62.5826% ( 37) 00:08:04.428 16.407 - 16.524: 63.0030% ( 35) 00:08:04.429 16.524 - 16.640: 63.4354% ( 36) 00:08:04.429 16.640 - 16.756: 63.7598% ( 27) 00:08:04.429 16.756 - 16.873: 64.0240% ( 22) 00:08:04.429 16.873 - 16.989: 64.3123% ( 24) 00:08:04.429 16.989 - 17.105: 64.4324% ( 10) 00:08:04.429 17.105 - 17.222: 64.4805% ( 4) 00:08:04.429 17.222 - 17.338: 64.5766% ( 8) 00:08:04.429 17.338 - 17.455: 64.6727% ( 8) 00:08:04.429 17.455 - 17.571: 64.7087% ( 3) 00:08:04.429 17.571 - 17.687: 64.7688% ( 5) 00:08:04.429 17.687 - 17.804: 64.8048% ( 3) 00:08:04.429 17.804 - 17.920: 64.8168% ( 1) 00:08:04.429 17.920 - 18.036: 64.9850% ( 14) 00:08:04.429 18.036 - 18.153: 66.0420% ( 88) 00:08:04.429 18.153 - 18.269: 68.3844% ( 195) 00:08:04.429 18.269 - 18.385: 72.1321% ( 312) 00:08:04.429 18.385 - 18.502: 75.4595% ( 277) 00:08:04.429 18.502 - 18.618: 78.1021% ( 220) 00:08:04.429 18.618 - 18.735: 79.6396% ( 128) 00:08:04.429 18.735 - 18.851: 80.5526% ( 76) 00:08:04.429 18.851 - 18.967: 81.2492% ( 58) 00:08:04.429 18.967 - 19.084: 82.0300% ( 65) 00:08:04.429 19.084 - 19.200: 82.7267% ( 58) 00:08:04.429 19.200 - 19.316: 83.2432% ( 43) 00:08:04.429 19.316 - 19.433: 83.5556% ( 26) 00:08:04.429 19.433 - 19.549: 83.9279% ( 31) 00:08:04.429 19.549 - 19.665: 84.3123% ( 32) 00:08:04.429 19.665 - 19.782: 84.5285% ( 18) 00:08:04.429 19.782 - 19.898: 84.7928% ( 22) 00:08:04.429 19.898 - 20.015: 84.8769% ( 7) 00:08:04.429 20.015 - 20.131: 84.9970% ( 10) 00:08:04.429 20.131 - 20.247: 85.1171% ( 10) 00:08:04.429 20.247 - 20.364: 85.2372% ( 10) 00:08:04.429 20.364 - 20.480: 85.3333% ( 8) 00:08:04.429 20.480 - 20.596: 85.4414% ( 9) 00:08:04.429 20.596 - 20.713: 85.6216% ( 15) 00:08:04.429 20.713 - 20.829: 85.8378% ( 18) 00:08:04.429 20.829 - 20.945: 85.9580% ( 10) 00:08:04.429 20.945 - 21.062: 86.1021% ( 12) 00:08:04.429 21.062 - 21.178: 86.2943% ( 16) 00:08:04.429 
21.178 - 21.295: 86.5345% ( 20) 00:08:04.429 21.295 - 21.411: 86.7868% ( 21) 00:08:04.429 21.411 - 21.527: 87.0631% ( 23) 00:08:04.429 21.527 - 21.644: 87.3033% ( 20) 00:08:04.429 21.644 - 21.760: 87.5315% ( 19) 00:08:04.429 21.760 - 21.876: 87.6517% ( 10) 00:08:04.429 21.876 - 21.993: 87.7237% ( 6) 00:08:04.429 21.993 - 22.109: 87.8559% ( 11) 00:08:04.429 22.109 - 22.225: 88.0120% ( 13) 00:08:04.429 22.225 - 22.342: 88.0841% ( 6) 00:08:04.429 22.342 - 22.458: 88.2162% ( 11) 00:08:04.429 22.458 - 22.575: 88.2402% ( 2) 00:08:04.429 22.575 - 22.691: 88.3363% ( 8) 00:08:04.429 22.691 - 22.807: 88.4444% ( 9) 00:08:04.429 22.807 - 22.924: 88.5045% ( 5) 00:08:04.429 22.924 - 23.040: 88.6246% ( 10) 00:08:04.429 23.040 - 23.156: 88.6967% ( 6) 00:08:04.429 23.156 - 23.273: 88.7688% ( 6) 00:08:04.429 23.273 - 23.389: 88.8649% ( 8) 00:08:04.429 23.389 - 23.505: 88.9730% ( 9) 00:08:04.429 23.505 - 23.622: 89.0811% ( 9) 00:08:04.429 23.622 - 23.738: 89.1532% ( 6) 00:08:04.429 23.738 - 23.855: 89.2372% ( 7) 00:08:04.429 23.855 - 23.971: 89.2733% ( 3) 00:08:04.429 23.971 - 24.087: 89.2973% ( 2) 00:08:04.429 24.087 - 24.204: 89.3934% ( 8) 00:08:04.429 24.204 - 24.320: 89.4294% ( 3) 00:08:04.429 24.320 - 24.436: 89.4535% ( 2) 00:08:04.429 24.436 - 24.553: 89.5135% ( 5) 00:08:04.429 24.553 - 24.669: 89.5736% ( 5) 00:08:04.429 24.669 - 24.785: 89.6096% ( 3) 00:08:04.429 24.785 - 24.902: 89.6336% ( 2) 00:08:04.429 24.902 - 25.018: 89.6817% ( 4) 00:08:04.429 25.018 - 25.135: 89.7177% ( 3) 00:08:04.429 25.135 - 25.251: 89.7538% ( 3) 00:08:04.429 25.251 - 25.367: 89.8258% ( 6) 00:08:04.429 25.367 - 25.484: 89.8619% ( 3) 00:08:04.429 25.484 - 25.600: 89.9580% ( 8) 00:08:04.429 25.600 - 25.716: 90.0661% ( 9) 00:08:04.429 25.716 - 25.833: 90.1502% ( 7) 00:08:04.429 25.833 - 25.949: 90.2583% ( 9) 00:08:04.429 25.949 - 26.065: 90.3303% ( 6) 00:08:04.429 26.065 - 26.182: 90.4264% ( 8) 00:08:04.429 26.182 - 26.298: 90.5586% ( 11) 00:08:04.429 26.298 - 26.415: 90.6306% ( 6) 00:08:04.429 26.415 - 26.531: 90.7988% ( 14) 00:08:04.429 26.531 - 26.647: 90.8829% ( 7) 00:08:04.429 26.647 - 26.764: 90.9550% ( 6) 00:08:04.429 26.764 - 26.880: 91.0270% ( 6) 00:08:04.429 26.880 - 26.996: 91.0631% ( 3) 00:08:04.429 26.996 - 27.113: 91.0991% ( 3) 00:08:04.429 27.113 - 27.229: 91.1351% ( 3) 00:08:04.429 27.229 - 27.345: 91.1592% ( 2) 00:08:04.429 27.345 - 27.462: 91.2312% ( 6) 00:08:04.429 27.578 - 27.695: 91.2673% ( 3) 00:08:04.429 27.811 - 27.927: 91.3153% ( 4) 00:08:04.429 27.927 - 28.044: 91.3754% ( 5) 00:08:04.429 28.044 - 28.160: 91.4114% ( 3) 00:08:04.429 28.160 - 28.276: 91.4354% ( 2) 00:08:04.429 28.276 - 28.393: 91.4835% ( 4) 00:08:04.429 28.393 - 28.509: 91.5315% ( 4) 00:08:04.429 28.509 - 28.625: 91.5796% ( 4) 00:08:04.429 28.625 - 28.742: 91.6396% ( 5) 00:08:04.429 28.742 - 28.858: 91.6517% ( 1) 00:08:04.429 28.858 - 28.975: 91.7477% ( 8) 00:08:04.429 28.975 - 29.091: 91.9039% ( 13) 00:08:04.429 29.091 - 29.207: 92.0721% ( 14) 00:08:04.429 29.207 - 29.324: 92.4444% ( 31) 00:08:04.429 29.324 - 29.440: 92.8889% ( 37) 00:08:04.429 29.440 - 29.556: 93.5135% ( 52) 00:08:04.429 29.556 - 29.673: 94.2703% ( 63) 00:08:04.429 29.673 - 29.789: 94.8709% ( 50) 00:08:04.429 29.789 - 30.022: 96.2282% ( 113) 00:08:04.429 30.022 - 30.255: 96.9850% ( 63) 00:08:04.429 30.255 - 30.487: 97.6336% ( 54) 00:08:04.429 30.487 - 30.720: 98.0420% ( 34) 00:08:04.429 30.720 - 30.953: 98.3183% ( 23) 00:08:04.429 30.953 - 31.185: 98.4384% ( 10) 00:08:04.429 31.185 - 31.418: 98.4985% ( 5) 00:08:04.429 31.418 - 31.651: 98.5586% ( 5) 00:08:04.429 31.651 - 
31.884: 98.6426% ( 7) 00:08:04.429 31.884 - 32.116: 98.6787% ( 3) 00:08:04.429 32.116 - 32.349: 98.6907% ( 1) 00:08:04.429 32.349 - 32.582: 98.7267% ( 3) 00:08:04.429 32.582 - 32.815: 98.7387% ( 1) 00:08:04.429 32.815 - 33.047: 98.7508% ( 1) 00:08:04.429 33.047 - 33.280: 98.7868% ( 3) 00:08:04.429 33.280 - 33.513: 98.8228% ( 3) 00:08:04.429 33.513 - 33.745: 98.8468% ( 2) 00:08:04.429 33.745 - 33.978: 98.8949% ( 4) 00:08:04.429 33.978 - 34.211: 98.9189% ( 2) 00:08:04.429 34.676 - 34.909: 98.9309% ( 1) 00:08:04.429 34.909 - 35.142: 98.9670% ( 3) 00:08:04.429 35.142 - 35.375: 98.9790% ( 1) 00:08:04.429 35.375 - 35.607: 98.9910% ( 1) 00:08:04.429 35.840 - 36.073: 99.0030% ( 1) 00:08:04.429 36.073 - 36.305: 99.0511% ( 4) 00:08:04.429 36.305 - 36.538: 99.0751% ( 2) 00:08:04.429 36.538 - 36.771: 99.1592% ( 7) 00:08:04.429 36.771 - 37.004: 99.2192% ( 5) 00:08:04.429 37.004 - 37.236: 99.2432% ( 2) 00:08:04.429 37.236 - 37.469: 99.2793% ( 3) 00:08:04.429 37.469 - 37.702: 99.2913% ( 1) 00:08:04.429 37.702 - 37.935: 99.3273% ( 3) 00:08:04.429 37.935 - 38.167: 99.3634% ( 3) 00:08:04.429 38.167 - 38.400: 99.3994% ( 3) 00:08:04.429 38.400 - 38.633: 99.4114% ( 1) 00:08:04.429 38.633 - 38.865: 99.4234% ( 1) 00:08:04.429 38.865 - 39.098: 99.4474% ( 2) 00:08:04.429 39.098 - 39.331: 99.4955% ( 4) 00:08:04.429 39.331 - 39.564: 99.5075% ( 1) 00:08:04.429 39.564 - 39.796: 99.5435% ( 3) 00:08:04.429 40.029 - 40.262: 99.5556% ( 1) 00:08:04.429 40.262 - 40.495: 99.5676% ( 1) 00:08:04.429 40.727 - 40.960: 99.5796% ( 1) 00:08:04.429 41.658 - 41.891: 99.5916% ( 1) 00:08:04.429 42.124 - 42.356: 99.6036% ( 1) 00:08:04.429 44.684 - 44.916: 99.6156% ( 1) 00:08:04.429 44.916 - 45.149: 99.6517% ( 3) 00:08:04.429 45.149 - 45.382: 99.6757% ( 2) 00:08:04.429 45.382 - 45.615: 99.7117% ( 3) 00:08:04.429 45.615 - 45.847: 99.7357% ( 2) 00:08:04.429 45.847 - 46.080: 99.7718% ( 3) 00:08:04.429 46.080 - 46.313: 99.8078% ( 3) 00:08:04.430 46.313 - 46.545: 99.8198% ( 1) 00:08:04.430 46.545 - 46.778: 99.8318% ( 1) 00:08:04.430 47.244 - 47.476: 99.8438% ( 1) 00:08:04.430 47.476 - 47.709: 99.8799% ( 3) 00:08:04.430 47.709 - 47.942: 99.8919% ( 1) 00:08:04.430 50.735 - 50.967: 99.9039% ( 1) 00:08:04.430 51.898 - 52.131: 99.9159% ( 1) 00:08:04.430 52.829 - 53.062: 99.9279% ( 1) 00:08:04.430 54.691 - 54.924: 99.9399% ( 1) 00:08:04.430 55.855 - 56.087: 99.9520% ( 1) 00:08:04.430 58.880 - 59.113: 99.9640% ( 1) 00:08:04.430 61.905 - 62.371: 99.9760% ( 1) 00:08:04.430 119.156 - 120.087: 99.9880% ( 1) 00:08:04.430 134.982 - 135.913: 100.0000% ( 1) 00:08:04.430 00:08:04.430 Complete histogram 00:08:04.430 ================== 00:08:04.430 Range in us Cumulative Count 00:08:04.430 8.844 - 8.902: 0.0601% ( 5) 00:08:04.430 8.902 - 8.960: 0.2523% ( 16) 00:08:04.430 8.960 - 9.018: 0.5285% ( 23) 00:08:04.430 9.018 - 9.076: 1.1171% ( 49) 00:08:04.430 9.076 - 9.135: 2.1141% ( 83) 00:08:04.430 9.135 - 9.193: 2.9069% ( 66) 00:08:04.430 9.193 - 9.251: 4.3483% ( 120) 00:08:04.430 9.251 - 9.309: 6.4505% ( 175) 00:08:04.430 9.309 - 9.367: 9.6937% ( 270) 00:08:04.430 9.367 - 9.425: 14.3303% ( 386) 00:08:04.430 9.425 - 9.484: 19.8919% ( 463) 00:08:04.430 9.484 - 9.542: 25.3453% ( 454) 00:08:04.430 9.542 - 9.600: 30.6426% ( 441) 00:08:04.430 9.600 - 9.658: 34.8709% ( 352) 00:08:04.430 9.658 - 9.716: 38.4024% ( 294) 00:08:04.430 9.716 - 9.775: 41.2733% ( 239) 00:08:04.430 9.775 - 9.833: 43.6396% ( 197) 00:08:04.430 9.833 - 9.891: 45.4535% ( 151) 00:08:04.430 9.891 - 9.949: 47.1592% ( 142) 00:08:04.430 9.949 - 10.007: 49.1532% ( 166) 00:08:04.430 10.007 - 10.065: 
50.8829% ( 144) 00:08:04.430 10.065 - 10.124: 52.4805% ( 133) 00:08:04.430 10.124 - 10.182: 53.7417% ( 105) 00:08:04.430 10.182 - 10.240: 54.6426% ( 75) 00:08:04.430 10.240 - 10.298: 55.2072% ( 47) 00:08:04.430 10.298 - 10.356: 55.7958% ( 49) 00:08:04.430 10.356 - 10.415: 56.1562% ( 30) 00:08:04.430 10.415 - 10.473: 56.5526% ( 33) 00:08:04.430 10.473 - 10.531: 57.0931% ( 45) 00:08:04.430 10.531 - 10.589: 57.7177% ( 52) 00:08:04.430 10.589 - 10.647: 58.3664% ( 54) 00:08:04.430 10.647 - 10.705: 58.8829% ( 43) 00:08:04.430 10.705 - 10.764: 59.3033% ( 35) 00:08:04.430 10.764 - 10.822: 59.6396% ( 28) 00:08:04.430 10.822 - 10.880: 59.9640% ( 27) 00:08:04.430 10.880 - 10.938: 60.1682% ( 17) 00:08:04.430 10.938 - 10.996: 60.3844% ( 18) 00:08:04.430 10.996 - 11.055: 60.7207% ( 28) 00:08:04.430 11.055 - 11.113: 61.0210% ( 25) 00:08:04.430 11.113 - 11.171: 61.1772% ( 13) 00:08:04.430 11.171 - 11.229: 61.3213% ( 12) 00:08:04.430 11.229 - 11.287: 61.4895% ( 14) 00:08:04.430 11.287 - 11.345: 61.6096% ( 10) 00:08:04.430 11.345 - 11.404: 61.7417% ( 11) 00:08:04.430 11.404 - 11.462: 61.9219% ( 15) 00:08:04.430 11.462 - 11.520: 62.0300% ( 9) 00:08:04.430 11.520 - 11.578: 62.1381% ( 9) 00:08:04.430 11.578 - 11.636: 62.2703% ( 11) 00:08:04.430 11.636 - 11.695: 62.3664% ( 8) 00:08:04.430 11.695 - 11.753: 62.4865% ( 10) 00:08:04.430 11.753 - 11.811: 62.7868% ( 25) 00:08:04.430 11.811 - 11.869: 63.3273% ( 45) 00:08:04.430 11.869 - 11.927: 64.4324% ( 92) 00:08:04.430 11.927 - 11.985: 66.3904% ( 163) 00:08:04.430 11.985 - 12.044: 69.4054% ( 251) 00:08:04.430 12.044 - 12.102: 72.7688% ( 280) 00:08:04.430 12.102 - 12.160: 75.9159% ( 262) 00:08:04.430 12.160 - 12.218: 78.1261% ( 184) 00:08:04.430 12.218 - 12.276: 79.6036% ( 123) 00:08:04.430 12.276 - 12.335: 80.4324% ( 69) 00:08:04.430 12.335 - 12.393: 80.8649% ( 36) 00:08:04.430 12.393 - 12.451: 81.1291% ( 22) 00:08:04.430 12.451 - 12.509: 81.3934% ( 22) 00:08:04.430 12.509 - 12.567: 81.4655% ( 6) 00:08:04.430 12.567 - 12.625: 81.5495% ( 7) 00:08:04.430 12.625 - 12.684: 81.6336% ( 7) 00:08:04.430 12.684 - 12.742: 81.7417% ( 9) 00:08:04.430 12.742 - 12.800: 81.8138% ( 6) 00:08:04.430 12.800 - 12.858: 81.9820% ( 14) 00:08:04.430 12.858 - 12.916: 82.1381% ( 13) 00:08:04.430 12.916 - 12.975: 82.3063% ( 14) 00:08:04.430 12.975 - 13.033: 82.5105% ( 17) 00:08:04.430 13.033 - 13.091: 82.7988% ( 24) 00:08:04.430 13.091 - 13.149: 83.1111% ( 26) 00:08:04.430 13.149 - 13.207: 83.5556% ( 37) 00:08:04.430 13.207 - 13.265: 83.8078% ( 21) 00:08:04.430 13.265 - 13.324: 84.0961% ( 24) 00:08:04.430 13.324 - 13.382: 84.2763% ( 15) 00:08:04.430 13.382 - 13.440: 84.4565% ( 15) 00:08:04.430 13.440 - 13.498: 84.5886% ( 11) 00:08:04.430 13.498 - 13.556: 84.7087% ( 10) 00:08:04.430 13.556 - 13.615: 84.7808% ( 6) 00:08:04.430 13.615 - 13.673: 84.8408% ( 5) 00:08:04.430 13.673 - 13.731: 84.8769% ( 3) 00:08:04.430 13.731 - 13.789: 84.9009% ( 2) 00:08:04.430 13.789 - 13.847: 84.9369% ( 3) 00:08:04.430 13.847 - 13.905: 84.9970% ( 5) 00:08:04.430 13.905 - 13.964: 85.0571% ( 5) 00:08:04.430 13.964 - 14.022: 85.0811% ( 2) 00:08:04.430 14.022 - 14.080: 85.1051% ( 2) 00:08:04.430 14.080 - 14.138: 85.1291% ( 2) 00:08:04.430 14.138 - 14.196: 85.1411% ( 1) 00:08:04.430 14.196 - 14.255: 85.2012% ( 5) 00:08:04.430 14.255 - 14.313: 85.2372% ( 3) 00:08:04.430 14.313 - 14.371: 85.2492% ( 1) 00:08:04.430 14.371 - 14.429: 85.3213% ( 6) 00:08:04.430 14.429 - 14.487: 85.3694% ( 4) 00:08:04.430 14.487 - 14.545: 85.3814% ( 1) 00:08:04.430 14.545 - 14.604: 85.4294% ( 4) 00:08:04.430 14.604 - 14.662: 85.4535% ( 2) 
00:08:04.430 14.662 - 14.720: 85.4895% ( 3) 00:08:04.430 14.720 - 14.778: 85.5375% ( 4) 00:08:04.430 14.778 - 14.836: 85.5736% ( 3) 00:08:04.430 14.836 - 14.895: 85.6096% ( 3) 00:08:04.430 14.895 - 15.011: 85.7898% ( 15) 00:08:04.430 15.011 - 15.127: 86.0661% ( 23) 00:08:04.430 15.127 - 15.244: 86.1622% ( 8) 00:08:04.430 15.244 - 15.360: 86.2102% ( 4) 00:08:04.430 15.360 - 15.476: 86.2462% ( 3) 00:08:04.430 15.476 - 15.593: 86.3904% ( 12) 00:08:04.430 15.593 - 15.709: 86.4505% ( 5) 00:08:04.430 15.709 - 15.825: 86.5465% ( 8) 00:08:04.430 15.825 - 15.942: 86.6667% ( 10) 00:08:04.430 15.942 - 16.058: 86.7387% ( 6) 00:08:04.430 16.058 - 16.175: 86.8348% ( 8) 00:08:04.430 16.175 - 16.291: 86.9429% ( 9) 00:08:04.430 16.291 - 16.407: 87.0390% ( 8) 00:08:04.430 16.407 - 16.524: 87.1111% ( 6) 00:08:04.430 16.524 - 16.640: 87.1592% ( 4) 00:08:04.430 16.640 - 16.756: 87.2673% ( 9) 00:08:04.430 16.756 - 16.873: 87.3393% ( 6) 00:08:04.430 16.873 - 16.989: 87.4354% ( 8) 00:08:04.430 16.989 - 17.105: 87.5075% ( 6) 00:08:04.430 17.105 - 17.222: 87.6276% ( 10) 00:08:04.430 17.222 - 17.338: 87.7237% ( 8) 00:08:04.430 17.338 - 17.455: 87.7958% ( 6) 00:08:04.430 17.455 - 17.571: 87.8799% ( 7) 00:08:04.430 17.571 - 17.687: 87.9159% ( 3) 00:08:04.430 17.687 - 17.804: 87.9760% ( 5) 00:08:04.430 17.804 - 17.920: 88.0480% ( 6) 00:08:04.430 17.920 - 18.036: 88.1201% ( 6) 00:08:04.430 18.036 - 18.153: 88.1562% ( 3) 00:08:04.430 18.153 - 18.269: 88.2042% ( 4) 00:08:04.430 18.269 - 18.385: 88.2643% ( 5) 00:08:04.430 18.385 - 18.502: 88.3123% ( 4) 00:08:04.430 18.502 - 18.618: 88.3724% ( 5) 00:08:04.430 18.618 - 18.735: 88.4204% ( 4) 00:08:04.430 18.735 - 18.851: 88.4444% ( 2) 00:08:04.430 18.851 - 18.967: 88.4565% ( 1) 00:08:04.430 18.967 - 19.084: 88.4925% ( 3) 00:08:04.430 19.084 - 19.200: 88.5646% ( 6) 00:08:04.430 19.200 - 19.316: 88.6006% ( 3) 00:08:04.430 19.316 - 19.433: 88.6246% ( 2) 00:08:04.430 19.433 - 19.549: 88.7087% ( 7) 00:08:04.430 19.549 - 19.665: 88.7207% ( 1) 00:08:04.430 19.665 - 19.782: 88.8529% ( 11) 00:08:04.430 19.782 - 19.898: 88.9129% ( 5) 00:08:04.430 19.898 - 20.015: 88.9369% ( 2) 00:08:04.430 20.015 - 20.131: 88.9610% ( 2) 00:08:04.430 20.131 - 20.247: 89.0330% ( 6) 00:08:04.430 20.247 - 20.364: 89.0931% ( 5) 00:08:04.430 20.364 - 20.480: 89.1652% ( 6) 00:08:04.430 20.480 - 20.596: 89.2372% ( 6) 00:08:04.431 20.596 - 20.713: 89.3093% ( 6) 00:08:04.431 20.713 - 20.829: 89.3574% ( 4) 00:08:04.431 20.829 - 20.945: 89.4054% ( 4) 00:08:04.431 20.945 - 21.062: 89.4535% ( 4) 00:08:04.431 21.062 - 21.178: 89.5255% ( 6) 00:08:04.431 21.178 - 21.295: 89.5495% ( 2) 00:08:04.431 21.295 - 21.411: 89.6096% ( 5) 00:08:04.431 21.411 - 21.527: 89.6577% ( 4) 00:08:04.431 21.527 - 21.644: 89.7658% ( 9) 00:08:04.431 21.644 - 21.760: 89.8498% ( 7) 00:08:04.431 21.760 - 21.876: 89.8979% ( 4) 00:08:04.431 21.876 - 21.993: 89.9219% ( 2) 00:08:04.431 21.993 - 22.109: 89.9580% ( 3) 00:08:04.431 22.109 - 22.225: 89.9820% ( 2) 00:08:04.431 22.225 - 22.342: 90.0300% ( 4) 00:08:04.431 22.342 - 22.458: 90.0781% ( 4) 00:08:04.431 22.458 - 22.575: 90.1021% ( 2) 00:08:04.431 22.575 - 22.691: 90.1261% ( 2) 00:08:04.431 22.691 - 22.807: 90.1502% ( 2) 00:08:04.431 22.807 - 22.924: 90.1862% ( 3) 00:08:04.431 22.924 - 23.040: 90.1982% ( 1) 00:08:04.431 23.040 - 23.156: 90.2222% ( 2) 00:08:04.431 23.156 - 23.273: 90.2462% ( 2) 00:08:04.431 23.273 - 23.389: 90.2583% ( 1) 00:08:04.431 23.389 - 23.505: 90.2703% ( 1) 00:08:04.431 23.505 - 23.622: 90.2943% ( 2) 00:08:04.431 23.622 - 23.738: 90.3183% ( 2) 00:08:04.431 23.738 - 
23.855: 90.3664% ( 4) 00:08:04.431 23.855 - 23.971: 90.4865% ( 10) 00:08:04.431 23.971 - 24.087: 90.8468% ( 30) 00:08:04.431 24.087 - 24.204: 91.3634% ( 43) 00:08:04.431 24.204 - 24.320: 92.1802% ( 68) 00:08:04.431 24.320 - 24.436: 93.1291% ( 79) 00:08:04.431 24.436 - 24.553: 94.0300% ( 75) 00:08:04.431 24.553 - 24.669: 94.8228% ( 66) 00:08:04.431 24.669 - 24.785: 95.7598% ( 78) 00:08:04.431 24.785 - 24.902: 96.5045% ( 62) 00:08:04.431 24.902 - 25.018: 96.9249% ( 35) 00:08:04.431 25.018 - 25.135: 97.2853% ( 30) 00:08:04.431 25.135 - 25.251: 97.4655% ( 15) 00:08:04.431 25.251 - 25.367: 97.7297% ( 22) 00:08:04.431 25.367 - 25.484: 97.9219% ( 16) 00:08:04.431 25.484 - 25.600: 98.0781% ( 13) 00:08:04.431 25.600 - 25.716: 98.2222% ( 12) 00:08:04.431 25.716 - 25.833: 98.3664% ( 12) 00:08:04.431 25.833 - 25.949: 98.4985% ( 11) 00:08:04.431 25.949 - 26.065: 98.5826% ( 7) 00:08:04.431 26.065 - 26.182: 98.6306% ( 4) 00:08:04.431 26.182 - 26.298: 98.6667% ( 3) 00:08:04.431 26.298 - 26.415: 98.7147% ( 4) 00:08:04.431 26.415 - 26.531: 98.7868% ( 6) 00:08:04.431 26.647 - 26.764: 98.8468% ( 5) 00:08:04.431 26.764 - 26.880: 98.8589% ( 1) 00:08:04.431 26.880 - 26.996: 98.8709% ( 1) 00:08:04.431 26.996 - 27.113: 98.8829% ( 1) 00:08:04.431 27.113 - 27.229: 98.9069% ( 2) 00:08:04.431 27.229 - 27.345: 98.9550% ( 4) 00:08:04.431 27.462 - 27.578: 98.9670% ( 1) 00:08:04.431 27.695 - 27.811: 99.0150% ( 4) 00:08:04.431 28.044 - 28.160: 99.0390% ( 2) 00:08:04.431 28.160 - 28.276: 99.0871% ( 4) 00:08:04.690 28.276 - 28.393: 99.1111% ( 2) 00:08:04.690 28.742 - 28.858: 99.1231% ( 1) 00:08:04.690 29.324 - 29.440: 99.1351% ( 1) 00:08:04.690 29.673 - 29.789: 99.1471% ( 1) 00:08:04.690 29.789 - 30.022: 99.1592% ( 1) 00:08:04.690 30.255 - 30.487: 99.1712% ( 1) 00:08:04.690 30.487 - 30.720: 99.1952% ( 2) 00:08:04.690 30.720 - 30.953: 99.2312% ( 3) 00:08:04.690 30.953 - 31.185: 99.2553% ( 2) 00:08:04.690 31.185 - 31.418: 99.2793% ( 2) 00:08:04.690 31.418 - 31.651: 99.3273% ( 4) 00:08:04.690 31.651 - 31.884: 99.3754% ( 4) 00:08:04.690 31.884 - 32.116: 99.4595% ( 7) 00:08:04.690 32.116 - 32.349: 99.5195% ( 5) 00:08:04.690 32.349 - 32.582: 99.5556% ( 3) 00:08:04.690 32.582 - 32.815: 99.6396% ( 7) 00:08:04.690 32.815 - 33.047: 99.6517% ( 1) 00:08:04.690 33.047 - 33.280: 99.6637% ( 1) 00:08:04.690 33.280 - 33.513: 99.6757% ( 1) 00:08:04.690 33.513 - 33.745: 99.6997% ( 2) 00:08:04.690 33.745 - 33.978: 99.7357% ( 3) 00:08:04.690 33.978 - 34.211: 99.7598% ( 2) 00:08:04.690 34.211 - 34.444: 99.7718% ( 1) 00:08:04.690 34.444 - 34.676: 99.7838% ( 1) 00:08:04.690 37.236 - 37.469: 99.7958% ( 1) 00:08:04.690 38.633 - 38.865: 99.8198% ( 2) 00:08:04.690 39.098 - 39.331: 99.8318% ( 1) 00:08:04.690 40.262 - 40.495: 99.8438% ( 1) 00:08:04.690 40.727 - 40.960: 99.8679% ( 2) 00:08:04.690 41.193 - 41.425: 99.8799% ( 1) 00:08:04.690 43.055 - 43.287: 99.8919% ( 1) 00:08:04.690 45.149 - 45.382: 99.9039% ( 1) 00:08:04.690 46.778 - 47.011: 99.9279% ( 2) 00:08:04.690 47.709 - 47.942: 99.9399% ( 1) 00:08:04.690 49.571 - 49.804: 99.9520% ( 1) 00:08:04.690 50.269 - 50.502: 99.9640% ( 1) 00:08:04.690 54.691 - 54.924: 99.9760% ( 1) 00:08:04.690 60.975 - 61.440: 99.9880% ( 1) 00:08:04.690 70.749 - 71.215: 100.0000% ( 1) 00:08:04.690 00:08:04.690 ************************************ 00:08:04.690 END TEST nvme_overhead 00:08:04.690 ************************************ 00:08:04.690 00:08:04.690 real 0m1.248s 00:08:04.690 user 0m1.096s 00:08:04.690 sys 0m0.103s 00:08:04.690 12:49:56 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:08:04.690 12:49:56 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:04.690 12:49:56 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:04.690 12:49:56 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:04.690 12:49:56 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.690 12:49:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.690 ************************************ 00:08:04.690 START TEST nvme_arbitration 00:08:04.690 ************************************ 00:08:04.690 12:49:56 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:04.690 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:07.977 Initializing NVMe Controllers 00:08:07.977 Attached to 0000:00:10.0 00:08:07.977 Attached to 0000:00:11.0 00:08:07.977 Attached to 0000:00:13.0 00:08:07.977 Attached to 0000:00:12.0 00:08:07.977 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:07.977 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:07.977 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:07.977 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:07.977 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:07.977 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:07.977 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:07.977 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:07.977 Initialization complete. Launching workers. 00:08:07.977 Starting thread on core 1 with urgent priority queue 00:08:07.977 Starting thread on core 2 with urgent priority queue 00:08:07.977 Starting thread on core 3 with urgent priority queue 00:08:07.977 Starting thread on core 0 with urgent priority queue 00:08:07.977 QEMU NVMe Ctrl (12340 ) core 0: 5034.67 IO/s 19.86 secs/100000 ios 00:08:07.977 QEMU NVMe Ctrl (12342 ) core 0: 5034.67 IO/s 19.86 secs/100000 ios 00:08:07.977 QEMU NVMe Ctrl (12341 ) core 1: 5034.67 IO/s 19.86 secs/100000 ios 00:08:07.977 QEMU NVMe Ctrl (12342 ) core 1: 5034.67 IO/s 19.86 secs/100000 ios 00:08:07.977 QEMU NVMe Ctrl (12343 ) core 2: 5141.33 IO/s 19.45 secs/100000 ios 00:08:07.977 QEMU NVMe Ctrl (12342 ) core 3: 4992.00 IO/s 20.03 secs/100000 ios 00:08:07.977 ======================================================== 00:08:07.977 00:08:07.977 00:08:07.977 real 0m3.274s 00:08:07.977 user 0m9.032s 00:08:07.977 sys 0m0.148s 00:08:07.977 12:49:59 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.977 12:49:59 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:07.977 ************************************ 00:08:07.977 END TEST nvme_arbitration 00:08:07.977 ************************************ 00:08:07.977 12:49:59 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:07.977 12:49:59 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:07.977 12:49:59 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.977 12:49:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.977 ************************************ 00:08:07.977 START TEST nvme_single_aen 00:08:07.977 ************************************ 00:08:07.977 12:49:59 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:07.977 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:08.236 Asynchronous Event Request test 00:08:08.236 Attached to 0000:00:10.0 00:08:08.236 Attached to 0000:00:11.0 00:08:08.236 Attached to 0000:00:13.0 00:08:08.236 Attached to 0000:00:12.0 00:08:08.236 Reset controller to setup AER completions for this process 00:08:08.236 Registering asynchronous event callbacks... 00:08:08.236 Getting orig temperature thresholds of all controllers 00:08:08.236 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:08.236 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:08.236 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:08.236 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:08.236 Setting all controllers temperature threshold low to trigger AER 00:08:08.236 Waiting for all controllers temperature threshold to be set lower 00:08:08.236 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:08.236 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:08.236 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:08.236 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:08.236 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:08.236 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:08.236 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:08.236 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:08.236 Waiting for all controllers to trigger AER and reset threshold 00:08:08.236 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.236 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.236 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.236 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.236 Cleaning up... 
00:08:08.236 00:08:08.236 real 0m0.267s 00:08:08.236 user 0m0.105s 00:08:08.236 sys 0m0.116s 00:08:08.236 12:49:59 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:08.236 ************************************ 00:08:08.236 END TEST nvme_single_aen 00:08:08.236 ************************************ 00:08:08.236 12:49:59 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:08.236 12:49:59 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:08.236 12:49:59 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:08.236 12:49:59 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.236 12:49:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.236 ************************************ 00:08:08.236 START TEST nvme_doorbell_aers 00:08:08.236 ************************************ 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:08.236 12:49:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:08.494 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:08.494 [2024-08-11 12:50:00.039403] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:18.467 Executing: test_write_invalid_db 00:08:18.467 Waiting for AER completion... 00:08:18.467 Failure: test_write_invalid_db 00:08:18.467 00:08:18.467 Executing: test_invalid_db_write_overflow_sq 00:08:18.467 Waiting for AER completion... 00:08:18.467 Failure: test_invalid_db_write_overflow_sq 00:08:18.467 00:08:18.467 Executing: test_invalid_db_write_overflow_cq 00:08:18.467 Waiting for AER completion... 
00:08:18.467 Failure: test_invalid_db_write_overflow_cq 00:08:18.467 00:08:18.467 12:50:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:18.467 12:50:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:18.467 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:18.725 [2024-08-11 12:50:10.083446] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:28.701 Executing: test_write_invalid_db 00:08:28.701 Waiting for AER completion... 00:08:28.701 Failure: test_write_invalid_db 00:08:28.701 00:08:28.701 Executing: test_invalid_db_write_overflow_sq 00:08:28.701 Waiting for AER completion... 00:08:28.701 Failure: test_invalid_db_write_overflow_sq 00:08:28.701 00:08:28.701 Executing: test_invalid_db_write_overflow_cq 00:08:28.701 Waiting for AER completion... 00:08:28.701 Failure: test_invalid_db_write_overflow_cq 00:08:28.701 00:08:28.701 12:50:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:28.701 12:50:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:28.701 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:28.701 [2024-08-11 12:50:20.090368] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:38.669 Executing: test_write_invalid_db 00:08:38.669 Waiting for AER completion... 00:08:38.669 Failure: test_write_invalid_db 00:08:38.669 00:08:38.669 Executing: test_invalid_db_write_overflow_sq 00:08:38.669 Waiting for AER completion... 00:08:38.669 Failure: test_invalid_db_write_overflow_sq 00:08:38.669 00:08:38.669 Executing: test_invalid_db_write_overflow_cq 00:08:38.669 Waiting for AER completion... 00:08:38.669 Failure: test_invalid_db_write_overflow_cq 00:08:38.669 00:08:38.669 12:50:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:38.669 12:50:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:38.669 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:38.669 [2024-08-11 12:50:30.134731] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 Executing: test_write_invalid_db 00:08:48.634 Waiting for AER completion... 00:08:48.634 Failure: test_write_invalid_db 00:08:48.634 00:08:48.634 Executing: test_invalid_db_write_overflow_sq 00:08:48.634 Waiting for AER completion... 00:08:48.634 Failure: test_invalid_db_write_overflow_sq 00:08:48.634 00:08:48.634 Executing: test_invalid_db_write_overflow_cq 00:08:48.634 Waiting for AER completion... 
00:08:48.634 Failure: test_invalid_db_write_overflow_cq 00:08:48.634 00:08:48.634 00:08:48.634 real 0m40.240s 00:08:48.634 user 0m34.316s 00:08:48.634 sys 0m5.553s 00:08:48.634 12:50:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:48.634 12:50:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:48.634 ************************************ 00:08:48.634 END TEST nvme_doorbell_aers 00:08:48.634 ************************************ 00:08:48.634 12:50:40 nvme -- nvme/nvme.sh@97 -- # uname 00:08:48.634 12:50:40 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:48.634 12:50:40 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:48.634 12:50:40 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:48.634 12:50:40 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:48.634 12:50:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.634 ************************************ 00:08:48.634 START TEST nvme_multi_aen 00:08:48.634 ************************************ 00:08:48.634 12:50:40 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:48.634 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:48.634 [2024-08-11 12:50:40.208222] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.208589] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.208752] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.210304] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.210521] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.210726] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.212349] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.212583] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.212724] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.214192] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 [2024-08-11 12:50:40.214429] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 
00:08:48.634 [2024-08-11 12:50:40.214567] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74504) is not found. Dropping the request. 00:08:48.634 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:48.634 Child process pid: 75025 00:08:48.892 [Child] Asynchronous Event Request test 00:08:48.893 [Child] Attached to 0000:00:10.0 00:08:48.893 [Child] Attached to 0000:00:11.0 00:08:48.893 [Child] Attached to 0000:00:13.0 00:08:48.893 [Child] Attached to 0000:00:12.0 00:08:48.893 [Child] Registering asynchronous event callbacks... 00:08:48.893 [Child] Getting orig temperature thresholds of all controllers 00:08:48.893 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.893 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.893 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.893 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.893 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:48.893 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.893 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.893 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.893 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.893 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.893 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.893 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.893 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.893 [Child] Cleaning up... 00:08:49.151 Asynchronous Event Request test 00:08:49.151 Attached to 0000:00:10.0 00:08:49.151 Attached to 0000:00:11.0 00:08:49.151 Attached to 0000:00:13.0 00:08:49.151 Attached to 0000:00:12.0 00:08:49.151 Reset controller to setup AER completions for this process 00:08:49.151 Registering asynchronous event callbacks... 
00:08:49.151 Getting orig temperature thresholds of all controllers 00:08:49.151 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.151 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.151 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.151 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.151 Setting all controllers temperature threshold low to trigger AER 00:08:49.151 Waiting for all controllers temperature threshold to be set lower 00:08:49.151 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.151 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:49.151 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.151 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:49.151 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.151 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:49.151 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.151 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:49.151 Waiting for all controllers to trigger AER and reset threshold 00:08:49.151 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.151 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.151 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.151 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.151 Cleaning up... 00:08:49.151 ************************************ 00:08:49.151 END TEST nvme_multi_aen 00:08:49.151 ************************************ 00:08:49.151 00:08:49.151 real 0m0.472s 00:08:49.151 user 0m0.166s 00:08:49.151 sys 0m0.193s 00:08:49.151 12:50:40 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:49.151 12:50:40 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:49.151 12:50:40 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:49.151 12:50:40 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:49.151 12:50:40 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:49.151 12:50:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.151 ************************************ 00:08:49.151 START TEST nvme_startup 00:08:49.151 ************************************ 00:08:49.151 12:50:40 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:49.151 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:49.409 Initializing NVMe Controllers 00:08:49.409 Attached to 0000:00:10.0 00:08:49.409 Attached to 0000:00:11.0 00:08:49.409 Attached to 0000:00:13.0 00:08:49.409 Attached to 0000:00:12.0 00:08:49.409 Initialization complete. 00:08:49.409 Time used:173778.125 (us). 
00:08:49.409 ************************************ 00:08:49.409 END TEST nvme_startup 00:08:49.409 ************************************ 00:08:49.409 00:08:49.409 real 0m0.248s 00:08:49.409 user 0m0.088s 00:08:49.409 sys 0m0.111s 00:08:49.409 12:50:40 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:49.409 12:50:40 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:49.409 12:50:40 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:49.409 12:50:40 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:49.409 12:50:40 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:49.409 12:50:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.409 ************************************ 00:08:49.409 START TEST nvme_multi_secondary 00:08:49.409 ************************************ 00:08:49.409 12:50:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:08:49.409 12:50:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75081 00:08:49.409 12:50:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:49.409 12:50:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75082 00:08:49.409 12:50:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:49.409 12:50:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:49.409 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:49.409 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:49.409 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:52.688 Initializing NVMe Controllers 00:08:52.688 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.688 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.688 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.688 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.688 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:52.688 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:52.688 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:52.688 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:52.688 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:52.688 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:52.688 Initialization complete. Launching workers. 
00:08:52.688 ======================================================== 00:08:52.688 Latency(us) 00:08:52.688 Device Information : IOPS MiB/s Average min max 00:08:52.688 PCIE (0000:00:10.0) NSID 1 from core 1: 5122.18 20.01 3121.66 1298.43 6896.71 00:08:52.688 PCIE (0000:00:11.0) NSID 1 from core 1: 5122.18 20.01 3123.10 1360.77 6600.58 00:08:52.688 PCIE (0000:00:13.0) NSID 1 from core 1: 5122.18 20.01 3123.06 1312.75 6279.31 00:08:52.688 PCIE (0000:00:12.0) NSID 1 from core 1: 5122.18 20.01 3123.14 1347.01 6431.94 00:08:52.688 PCIE (0000:00:12.0) NSID 2 from core 1: 5122.18 20.01 3123.03 1320.83 6497.75 00:08:52.688 PCIE (0000:00:12.0) NSID 3 from core 1: 5122.18 20.01 3122.82 1244.63 7017.85 00:08:52.688 ======================================================== 00:08:52.688 Total : 30733.09 120.05 3122.80 1244.63 7017.85 00:08:52.688 00:08:52.946 Initializing NVMe Controllers 00:08:52.946 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.946 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.946 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.946 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.946 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:52.946 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:52.946 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:52.946 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:52.946 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:52.946 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:52.946 Initialization complete. Launching workers. 00:08:52.946 ======================================================== 00:08:52.946 Latency(us) 00:08:52.946 Device Information : IOPS MiB/s Average min max 00:08:52.946 PCIE (0000:00:10.0) NSID 1 from core 2: 2377.56 9.29 6726.91 1659.26 19613.87 00:08:52.946 PCIE (0000:00:11.0) NSID 1 from core 2: 2377.56 9.29 6733.74 1640.86 15828.80 00:08:52.946 PCIE (0000:00:13.0) NSID 1 from core 2: 2377.56 9.29 6737.31 1556.15 13792.89 00:08:52.946 PCIE (0000:00:12.0) NSID 1 from core 2: 2377.56 9.29 6736.41 1756.43 13800.34 00:08:52.946 PCIE (0000:00:12.0) NSID 2 from core 2: 2377.56 9.29 6735.85 1792.66 14562.49 00:08:52.946 PCIE (0000:00:12.0) NSID 3 from core 2: 2377.56 9.29 6736.03 1679.99 17055.94 00:08:52.946 ======================================================== 00:08:52.946 Total : 14265.36 55.72 6734.37 1556.15 19613.87 00:08:52.946 00:08:52.946 12:50:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75081 00:08:54.845 Initializing NVMe Controllers 00:08:54.845 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.845 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.845 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.845 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.845 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:54.845 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:54.845 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:54.845 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:54.845 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:54.845 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:54.845 Initialization complete. Launching workers. 
00:08:54.845 ======================================================== 00:08:54.845 Latency(us) 00:08:54.845 Device Information : IOPS MiB/s Average min max 00:08:54.845 PCIE (0000:00:10.0) NSID 1 from core 0: 8219.27 32.11 1945.15 958.75 9704.37 00:08:54.845 PCIE (0000:00:11.0) NSID 1 from core 0: 8219.27 32.11 1946.15 988.33 9652.09 00:08:54.845 PCIE (0000:00:13.0) NSID 1 from core 0: 8219.27 32.11 1946.13 760.06 9920.32 00:08:54.845 PCIE (0000:00:12.0) NSID 1 from core 0: 8219.27 32.11 1946.10 649.26 10209.33 00:08:54.845 PCIE (0000:00:12.0) NSID 2 from core 0: 8222.47 32.12 1945.31 563.76 10520.05 00:08:54.845 PCIE (0000:00:12.0) NSID 3 from core 0: 8222.47 32.12 1945.27 431.26 10042.84 00:08:54.845 ======================================================== 00:08:54.845 Total : 49322.03 192.66 1945.68 431.26 10520.05 00:08:54.845 00:08:54.845 12:50:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75082 00:08:54.845 12:50:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75151 00:08:54.845 12:50:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:54.845 12:50:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75152 00:08:54.845 12:50:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:54.845 12:50:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:54.845 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:54.845 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:54.845 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:58.128 Initializing NVMe Controllers 00:08:58.128 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.128 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:58.128 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:58.128 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:58.128 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:58.128 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:58.128 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:58.128 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:58.128 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:58.128 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:58.128 Initialization complete. Launching workers. 
00:08:58.128 ======================================================== 00:08:58.128 Latency(us) 00:08:58.128 Device Information : IOPS MiB/s Average min max 00:08:58.128 PCIE (0000:00:10.0) NSID 1 from core 0: 5354.29 20.92 2986.45 961.18 7265.71 00:08:58.128 PCIE (0000:00:11.0) NSID 1 from core 0: 5354.29 20.92 2987.69 995.99 7132.42 00:08:58.128 PCIE (0000:00:13.0) NSID 1 from core 0: 5354.29 20.92 2987.80 990.89 7086.04 00:08:58.128 PCIE (0000:00:12.0) NSID 1 from core 0: 5354.29 20.92 2987.58 974.93 7179.43 00:08:58.128 PCIE (0000:00:12.0) NSID 2 from core 0: 5354.29 20.92 2987.53 985.34 7677.21 00:08:58.128 PCIE (0000:00:12.0) NSID 3 from core 0: 5354.29 20.92 2987.63 1007.95 7256.00 00:08:58.128 ======================================================== 00:08:58.128 Total : 32125.76 125.49 2987.45 961.18 7677.21 00:08:58.128 00:08:58.128 Initializing NVMe Controllers 00:08:58.128 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.128 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:58.128 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:58.128 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:58.128 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:58.128 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:58.128 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:58.128 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:58.128 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:58.128 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:58.128 Initialization complete. Launching workers. 00:08:58.128 ======================================================== 00:08:58.128 Latency(us) 00:08:58.128 Device Information : IOPS MiB/s Average min max 00:08:58.128 PCIE (0000:00:10.0) NSID 1 from core 1: 5384.75 21.03 2969.44 1021.00 6250.73 00:08:58.128 PCIE (0000:00:11.0) NSID 1 from core 1: 5384.75 21.03 2970.67 1032.84 6120.57 00:08:58.128 PCIE (0000:00:13.0) NSID 1 from core 1: 5384.75 21.03 2970.51 1043.10 5982.70 00:08:58.128 PCIE (0000:00:12.0) NSID 1 from core 1: 5384.75 21.03 2970.35 1048.29 5956.34 00:08:58.128 PCIE (0000:00:12.0) NSID 2 from core 1: 5384.75 21.03 2970.14 1060.23 5861.05 00:08:58.128 PCIE (0000:00:12.0) NSID 3 from core 1: 5384.75 21.03 2969.94 745.09 6220.36 00:08:58.128 ======================================================== 00:08:58.128 Total : 32308.48 126.20 2970.17 745.09 6250.73 00:08:58.128 00:09:00.048 Initializing NVMe Controllers 00:09:00.048 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.048 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.048 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.048 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.048 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:00.048 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:00.048 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:00.048 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:00.048 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:00.048 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:00.048 Initialization complete. Launching workers. 
00:09:00.048 ======================================================== 00:09:00.048 Latency(us) 00:09:00.048 Device Information : IOPS MiB/s Average min max 00:09:00.048 PCIE (0000:00:10.0) NSID 1 from core 2: 3801.01 14.85 4206.66 994.88 13200.67 00:09:00.048 PCIE (0000:00:11.0) NSID 1 from core 2: 3801.01 14.85 4208.74 960.16 13150.70 00:09:00.048 PCIE (0000:00:13.0) NSID 1 from core 2: 3801.01 14.85 4208.36 873.60 13297.59 00:09:00.048 PCIE (0000:00:12.0) NSID 1 from core 2: 3801.01 14.85 4208.12 754.33 14018.53 00:09:00.048 PCIE (0000:00:12.0) NSID 2 from core 2: 3801.01 14.85 4207.91 628.23 14573.07 00:09:00.048 PCIE (0000:00:12.0) NSID 3 from core 2: 3801.01 14.85 4207.48 527.26 14767.41 00:09:00.048 ======================================================== 00:09:00.048 Total : 22806.04 89.09 4207.88 527.26 14767.41 00:09:00.048 00:09:00.307 ************************************ 00:09:00.307 END TEST nvme_multi_secondary 00:09:00.307 ************************************ 00:09:00.307 12:50:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75151 00:09:00.308 12:50:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75152 00:09:00.308 00:09:00.308 real 0m10.830s 00:09:00.308 user 0m18.359s 00:09:00.308 sys 0m0.831s 00:09:00.308 12:50:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:00.308 12:50:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:00.308 12:50:51 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:00.308 12:50:51 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/74108 ]] 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@1086 -- # kill 74108 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@1087 -- # wait 74108 00:09:00.308 [2024-08-11 12:50:51.726134] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.727053] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.727092] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.727113] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.727715] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.727804] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.727821] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.727837] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.728424] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 
00:09:00.308 [2024-08-11 12:50:51.728463] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.728496] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.728515] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.729092] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.729141] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.729160] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.729182] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75024) is not found. Dropping the request. 00:09:00.308 [2024-08-11 12:50:51.828447] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:09:00.308 12:50:51 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:00.308 12:50:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.308 ************************************ 00:09:00.308 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:00.308 ************************************ 00:09:00.308 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:00.567 * Looking for test storage... 
00:09:00.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:00.567 12:50:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75301 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75301 00:09:00.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 75301 ']' 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:00.567 12:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.567 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:00.567 [2024-08-11 12:50:52.131434] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:09:00.567 [2024-08-11 12:50:52.131759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75301 ] 00:09:00.827 [2024-08-11 12:50:52.303740] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.827 [2024-08-11 12:50:52.350731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.827 [2024-08-11 12:50:52.351048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.827 [2024-08-11 12:50:52.350987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.827 [2024-08-11 12:50:52.350961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.775 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:01.775 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:09:01.775 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:01.775 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.776 nvme0n1 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_iWAKo.txt 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.776 true 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1723380653 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75335 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:01.776 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:01.777 12:50:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.685 [2024-08-11 12:50:55.208241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:03.685 [2024-08-11 12:50:55.208626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:03.685 [2024-08-11 12:50:55.208673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:03.685 [2024-08-11 12:50:55.208706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.685 [2024-08-11 12:50:55.210826] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:03.685 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75335 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75335 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75335 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:03.685 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_iWAKo.txt 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_iWAKo.txt 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75301 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 75301 ']' 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 75301 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75301 00:09:03.945 killing process with pid 75301 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75301' 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 75301 00:09:03.945 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 75301 00:09:04.204 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:04.204 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:04.204 00:09:04.204 real 0m3.799s 00:09:04.204 user 0m13.679s 00:09:04.204 sys 0m0.554s 00:09:04.204 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:04.204 ************************************ 00:09:04.204 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:04.204 ************************************ 00:09:04.205 12:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.205 12:50:55 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:04.205 12:50:55 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:04.205 12:50:55 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:04.205 12:50:55 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:04.205 12:50:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.205 ************************************ 00:09:04.205 START TEST nvme_fio 00:09:04.205 ************************************ 00:09:04.205 12:50:55 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:04.205 12:50:55 nvme.nvme_fio -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:09:04.205 12:50:55 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:09:04.205 12:50:55 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:04.205 12:50:55 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:04.205 12:50:55 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:04.205 12:50:55 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:04.205 12:50:55 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:04.205 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.464 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:04.464 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:04.464 12:50:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.464 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:04.723 12:50:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.723 12:50:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:09:04.723 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.724 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.724 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:09:04.724 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:09:04.724 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.724 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.724 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:09:04.724 12:50:56 nvme.nvme_fio 
-- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.724 12:50:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:05.223 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:05.223 fio-3.35 00:09:05.223 Starting 1 thread 00:09:05.223 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:08.509 00:09:08.509 test: (groupid=0, jobs=1): err= 0: pid=75458: Sun Aug 11 12:50:59 2024 00:09:08.509 read: IOPS=16.5k, BW=64.3MiB/s (67.5MB/s)(129MiB/2001msec) 00:09:08.509 slat (nsec): min=4078, max=63873, avg=5801.72, stdev=2080.28 00:09:08.509 clat (usec): min=351, max=10298, avg=3864.70, stdev=577.26 00:09:08.509 lat (usec): min=357, max=10345, avg=3870.50, stdev=577.88 00:09:08.509 clat percentiles (usec): 00:09:08.509 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3490], 00:09:08.509 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:09:08.509 | 70.00th=[ 3916], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4948], 00:09:08.509 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7898], 99.95th=[ 8979], 00:09:08.509 | 99.99th=[10028] 00:09:08.509 bw ( KiB/s): min=65064, max=72216, per=100.00%, avg=68258.67, stdev=3636.48, samples=3 00:09:08.509 iops : min=16266, max=18054, avg=17064.67, stdev=909.12, samples=3 00:09:08.509 write: IOPS=16.5k, BW=64.5MiB/s (67.6MB/s)(129MiB/2001msec); 0 zone resets 00:09:08.509 slat (nsec): min=4288, max=68220, avg=5974.22, stdev=2264.29 00:09:08.509 clat (usec): min=569, max=10082, avg=3875.94, stdev=574.21 00:09:08.509 lat (usec): min=576, max=10108, avg=3881.91, stdev=574.82 00:09:08.509 clat percentiles (usec): 00:09:08.509 | 1.00th=[ 2933], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3490], 00:09:08.509 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3851], 00:09:08.509 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4948], 00:09:08.509 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7898], 99.95th=[ 9110], 00:09:08.509 | 99.99th=[10028] 00:09:08.509 bw ( KiB/s): min=65472, max=71600, per=100.00%, avg=68072.00, stdev=3167.65, samples=3 00:09:08.509 iops : min=16368, max=17900, avg=17018.00, stdev=791.91, samples=3 00:09:08.509 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:08.509 lat (msec) : 2=0.11%, 4=73.63%, 10=26.22%, 20=0.01% 00:09:08.509 cpu : usr=99.00%, sys=0.05%, ctx=13, majf=0, minf=624 00:09:08.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:08.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.509 issued rwts: total=32956,33025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.509 00:09:08.509 Run status group 0 (all jobs): 00:09:08.509 READ: bw=64.3MiB/s (67.5MB/s), 64.3MiB/s-64.3MiB/s (67.5MB/s-67.5MB/s), io=129MiB (135MB), run=2001-2001msec 00:09:08.509 WRITE: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=129MiB (135MB), run=2001-2001msec 00:09:08.509 ----------------------------------------------------- 00:09:08.509 Suppressions used: 00:09:08.509 count bytes template 00:09:08.509 1 32 /usr/src/fio/parse.c 00:09:08.509 1 8 libtcmalloc_minimal.so 
00:09:08.509 ----------------------------------------------------- 00:09:08.509 00:09:08.509 12:50:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:08.509 12:50:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:08.509 12:50:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:08.509 12:50:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:08.509 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:08.509 12:51:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:08.509 12:51:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:08.509 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:08.767 12:51:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:08.767 12:51:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:08.767 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:09:08.768 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:08.768 12:51:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:09.026 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:09.026 fio-3.35 00:09:09.026 Starting 1 thread 00:09:09.026 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:12.340 00:09:12.340 test: (groupid=0, jobs=1): err= 0: pid=75519: Sun Aug 11 12:51:03 2024 00:09:12.340 read: IOPS=15.2k, BW=59.3MiB/s (62.1MB/s)(119MiB/2001msec) 00:09:12.340 slat (nsec): min=3848, 
max=80360, avg=6004.43, stdev=2726.06 00:09:12.340 clat (usec): min=308, max=9712, avg=4194.79, stdev=514.19 00:09:12.340 lat (usec): min=325, max=9758, avg=4200.80, stdev=514.72 00:09:12.340 clat percentiles (usec): 00:09:12.340 | 1.00th=[ 3458], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3785], 00:09:12.340 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4080], 60.00th=[ 4228], 00:09:12.340 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5080], 00:09:12.340 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 7111], 99.95th=[ 8225], 00:09:12.340 | 99.99th=[ 9503] 00:09:12.340 bw ( KiB/s): min=60320, max=61984, per=100.00%, avg=61373.33, stdev=916.07, samples=3 00:09:12.340 iops : min=15080, max=15496, avg=15343.33, stdev=229.02, samples=3 00:09:12.340 write: IOPS=15.2k, BW=59.4MiB/s (62.3MB/s)(119MiB/2001msec); 0 zone resets 00:09:12.340 slat (usec): min=3, max=158, avg= 6.14, stdev= 2.92 00:09:12.340 clat (usec): min=387, max=9564, avg=4207.32, stdev=515.21 00:09:12.340 lat (usec): min=394, max=9581, avg=4213.46, stdev=515.68 00:09:12.340 clat percentiles (usec): 00:09:12.340 | 1.00th=[ 3458], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3785], 00:09:12.340 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4080], 60.00th=[ 4228], 00:09:12.340 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4883], 95.00th=[ 5145], 00:09:12.340 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7373], 99.95th=[ 8356], 00:09:12.340 | 99.99th=[ 9372] 00:09:12.340 bw ( KiB/s): min=60720, max=61320, per=100.00%, avg=60925.33, stdev=341.88, samples=3 00:09:12.340 iops : min=15180, max=15330, avg=15231.33, stdev=85.47, samples=3 00:09:12.340 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:12.340 lat (msec) : 2=0.04%, 4=44.34%, 10=55.59% 00:09:12.340 cpu : usr=98.70%, sys=0.25%, ctx=17, majf=0, minf=624 00:09:12.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.340 issued rwts: total=30362,30418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.340 00:09:12.340 Run status group 0 (all jobs): 00:09:12.340 READ: bw=59.3MiB/s (62.1MB/s), 59.3MiB/s-59.3MiB/s (62.1MB/s-62.1MB/s), io=119MiB (124MB), run=2001-2001msec 00:09:12.340 WRITE: bw=59.4MiB/s (62.3MB/s), 59.4MiB/s-59.4MiB/s (62.3MB/s-62.3MB/s), io=119MiB (125MB), run=2001-2001msec 00:09:12.340 ----------------------------------------------------- 00:09:12.340 Suppressions used: 00:09:12.340 count bytes template 00:09:12.340 1 32 /usr/src/fio/parse.c 00:09:12.340 1 8 libtcmalloc_minimal.so 00:09:12.340 ----------------------------------------------------- 00:09:12.340 00:09:12.340 12:51:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.340 12:51:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:12.340 12:51:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:12.340 12:51:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:12.340 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:12.599 12:51:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:12.599 12:51:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.599 
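The loop traced above follows one pattern for every controller: enumerate the PCIe addresses with gen_nvme.sh, confirm each controller reports an active namespace via spdk_nvme_identify, then choose a block size before handing the device to fio. A rough standalone sketch of that pattern, with paths taken from this run and the extended-LBA branch simplified (an assumption for illustration, not the literal nvme.sh):

  rootdir=/home/vagrant/spdk_repo/spdk                      # repo location used in this job
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      # Skip controllers that expose no active namespace.
      "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
          | grep -qE '^Namespace ID:[0-9]+' || continue
      # Plain 4 KiB LBAs get bs=4096; an 'Extended Data LBA' format would need
      # the interleaved metadata added to bs (not exercised in this run).
      bs=4096
      echo "would run fio against $bdf with bs=$bs"
  done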
Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:12.857 12:51:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.857 12:51:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.857 12:51:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:13.115 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:13.115 fio-3.35 00:09:13.115 Starting 1 thread 00:09:13.115 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:16.399 00:09:16.399 test: (groupid=0, jobs=1): err= 0: pid=75584: Sun Aug 11 12:51:07 2024 00:09:16.399 read: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(114MiB/2001msec) 00:09:16.399 slat (nsec): min=4260, max=76316, avg=6434.58, stdev=3154.51 00:09:16.399 clat (usec): min=286, max=9329, avg=4346.00, stdev=702.78 00:09:16.399 lat (usec): min=293, max=9335, avg=4352.43, stdev=703.66 00:09:16.399 clat percentiles (usec): 00:09:16.399 | 1.00th=[ 3294], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3818], 00:09:16.399 | 30.00th=[ 3916], 40.00th=[ 4047], 50.00th=[ 4228], 60.00th=[ 4424], 00:09:16.399 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5342], 00:09:16.399 | 99.00th=[ 7308], 99.50th=[ 7898], 99.90th=[ 8979], 99.95th=[ 9110], 00:09:16.399 | 99.99th=[ 9241] 00:09:16.399 bw ( KiB/s): min=56960, max=61824, per=100.00%, avg=59138.67, stdev=2471.27, samples=3 00:09:16.399 iops : min=14240, max=15456, avg=14784.67, stdev=617.82, samples=3 00:09:16.399 write: IOPS=14.7k, BW=57.3MiB/s 
(60.1MB/s)(115MiB/2001msec); 0 zone resets 00:09:16.399 slat (nsec): min=4260, max=63208, avg=6581.93, stdev=3163.44 00:09:16.399 clat (usec): min=352, max=9426, avg=4361.73, stdev=694.74 00:09:16.399 lat (usec): min=359, max=9432, avg=4368.31, stdev=695.63 00:09:16.399 clat percentiles (usec): 00:09:16.399 | 1.00th=[ 3326], 5.00th=[ 3556], 10.00th=[ 3687], 20.00th=[ 3818], 00:09:16.399 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4293], 60.00th=[ 4424], 00:09:16.399 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5342], 00:09:16.400 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[ 8848], 99.95th=[ 9110], 00:09:16.400 | 99.99th=[ 9372] 00:09:16.400 bw ( KiB/s): min=56656, max=61376, per=100.00%, avg=58981.33, stdev=2360.76, samples=3 00:09:16.400 iops : min=14164, max=15344, avg=14745.33, stdev=590.19, samples=3 00:09:16.400 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:16.400 lat (msec) : 2=0.06%, 4=35.30%, 10=64.61% 00:09:16.400 cpu : usr=98.85%, sys=0.00%, ctx=4, majf=0, minf=625 00:09:16.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.400 issued rwts: total=29303,29349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.400 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.400 00:09:16.400 Run status group 0 (all jobs): 00:09:16.400 READ: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=114MiB (120MB), run=2001-2001msec 00:09:16.400 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=115MiB (120MB), run=2001-2001msec 00:09:16.400 ----------------------------------------------------- 00:09:16.400 Suppressions used: 00:09:16.400 count bytes template 00:09:16.400 1 32 /usr/src/fio/parse.c 00:09:16.400 1 8 libtcmalloc_minimal.so 00:09:16.400 ----------------------------------------------------- 00:09:16.400 00:09:16.400 12:51:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.400 12:51:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.400 12:51:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.400 12:51:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.400 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:16.400 12:51:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:16.400 12:51:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.658 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:16.658 12:51:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.658 12:51:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 
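Every fio invocation in this test is wrapped in the same sanitizer preload step: the harness runs ldd against the SPDK fio plugin and, if an ASAN runtime is linked in, preloads that runtime ahead of the plugin so a fio binary built without ASAN can still load it. A minimal sketch of the idea, using the paths visible in the log (the job file and device address are simply the ones from this run):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  fio_bin=/usr/src/fio/fio
  # Find the ASAN runtime the plugin links against, if any (empty when built without ASAN).
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload the sanitizer runtime first, then the SPDK ioengine plugin itself.
  LD_PRELOAD="$asan_lib $plugin" "$fio_bin" \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096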
00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.658 12:51:08 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:16.917 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.917 fio-3.35 00:09:16.917 Starting 1 thread 00:09:16.917 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:20.204 00:09:20.204 test: (groupid=0, jobs=1): err= 0: pid=75639: Sun Aug 11 12:51:11 2024 00:09:20.204 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec) 00:09:20.204 slat (nsec): min=4143, max=67826, avg=5915.46, stdev=2587.97 00:09:20.204 clat (usec): min=282, max=10702, avg=4030.08, stdev=540.66 00:09:20.204 lat (usec): min=287, max=10770, avg=4036.00, stdev=541.20 00:09:20.204 clat percentiles (usec): 00:09:20.204 | 1.00th=[ 3195], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3621], 00:09:20.204 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 3982], 00:09:20.204 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5014], 00:09:20.204 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 7504], 99.95th=[ 9110], 00:09:20.204 | 99.99th=[10552] 00:09:20.204 bw ( KiB/s): min=54800, max=67920, per=96.76%, avg=61114.67, stdev=6573.75, samples=3 00:09:20.204 iops : min=13700, max=16980, avg=15278.67, stdev=1643.44, samples=3 00:09:20.204 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(124MiB/2001msec); 0 zone resets 00:09:20.204 slat (nsec): min=4253, max=77761, avg=6008.51, stdev=2791.05 00:09:20.204 clat (usec): min=291, max=10564, avg=4046.21, stdev=548.73 00:09:20.204 lat (usec): min=297, max=10581, avg=4052.22, stdev=549.32 00:09:20.204 clat percentiles (usec): 00:09:20.204 | 1.00th=[ 3195], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3654], 00:09:20.204 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 4015], 00:09:20.204 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5014], 00:09:20.204 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 8029], 99.95th=[ 9241], 00:09:20.204 | 99.99th=[10290] 00:09:20.204 bw ( KiB/s): min=55152, max=66936, per=95.92%, avg=60658.67, stdev=5929.68, samples=3 00:09:20.204 iops : min=13788, max=16734, 
avg=15164.67, stdev=1482.42, samples=3 00:09:20.204 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:20.204 lat (msec) : 2=0.17%, 4=60.34%, 10=39.42%, 20=0.02% 00:09:20.204 cpu : usr=98.75%, sys=0.30%, ctx=4, majf=0, minf=624 00:09:20.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.204 issued rwts: total=31597,31634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.204 00:09:20.204 Run status group 0 (all jobs): 00:09:20.204 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:09:20.204 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=124MiB (130MB), run=2001-2001msec 00:09:20.204 ----------------------------------------------------- 00:09:20.204 Suppressions used: 00:09:20.204 count bytes template 00:09:20.204 1 32 /usr/src/fio/parse.c 00:09:20.204 1 8 libtcmalloc_minimal.so 00:09:20.204 ----------------------------------------------------- 00:09:20.204 00:09:20.204 12:51:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:20.204 12:51:11 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:20.204 00:09:20.204 real 0m16.037s 00:09:20.204 user 0m13.034s 00:09:20.204 sys 0m1.385s 00:09:20.204 12:51:11 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:20.204 12:51:11 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:20.205 ************************************ 00:09:20.205 END TEST nvme_fio 00:09:20.205 ************************************ 00:09:20.205 ************************************ 00:09:20.205 END TEST nvme 00:09:20.205 ************************************ 00:09:20.205 00:09:20.205 real 1m25.300s 00:09:20.205 user 3m32.814s 00:09:20.205 sys 0m12.492s 00:09:20.205 12:51:11 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:20.205 12:51:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.464 12:51:11 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:09:20.464 12:51:11 -- spdk/autotest.sh@230 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:20.464 12:51:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:20.464 12:51:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:20.464 12:51:11 -- common/autotest_common.sh@10 -- # set +x 00:09:20.464 ************************************ 00:09:20.464 START TEST nvme_scc 00:09:20.464 ************************************ 00:09:20.464 12:51:11 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:20.464 * Looking for test storage... 
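All four runs above print the same job header: rw=randrw, ioengine=spdk, iodepth=128, 4096-byte blocks, a single thread, and a roughly two-second time-based run. That header is consistent with a job file along these lines; this is a reconstruction from the printed output (direct=1 and the runtime value are assumptions), not the literal example_config.fio shipped with SPDK:

  # Hypothetical job file matching the header printed by each run above.
  cat > /tmp/spdk_randrw.fio <<'EOF'
  [global]
  ioengine=spdk
  thread=1
  direct=1
  rw=randrw
  iodepth=128
  time_based=1
  runtime=2

  [test]
  numjobs=1
  EOF
  # bs and --filename are supplied on the command line, exactly as in the runs above.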
00:09:20.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.464 12:51:11 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.464 12:51:11 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.464 12:51:11 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.464 12:51:11 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.464 12:51:11 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.464 12:51:11 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.464 12:51:11 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.464 12:51:11 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:20.464 12:51:11 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:20.464 12:51:11 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:20.464 12:51:11 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.464 12:51:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:20.464 12:51:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:20.464 12:51:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:20.464 12:51:11 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:20.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.982 Waiting for block devices as requested 00:09:20.982 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.241 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.241 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.241 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.514 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:26.514 12:51:17 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:26.514 12:51:17 nvme_scc -- scripts/common.sh@15 -- # local i 00:09:26.514 12:51:17 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:09:26.514 12:51:17 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:26.514 12:51:17 nvme_scc -- scripts/common.sh@24 -- # return 0 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:26.514 
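The long dump that continues below is functions.sh caching every id-ctrl field of the controller into a bash associative array: the nvme id-ctrl output is read line by line, split on ':', and each value is eval'ed into nvme0[<field>]. The core of that parsing pattern looks roughly like the following (a simplified sketch; the real nvme_get also walks namespaces and handles quoting):

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # field name, e.g. vid, sn, mdts
      val=${val# }                    # keep the value text as reported
      [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[mdts]}"               # prints 7 for this emulated QEMU controller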
12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:26.514 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:26.515 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:26.516 12:51:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:26.516 12:51:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:26.516 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.517 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:26.518 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
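The id-ns fields being captured here define the namespace geometry. As a quick back-of-the-envelope check on the values in this trace (not something the test itself computes): flbas=0x4 selects LBA format 4, whose descriptor just below reports lbads:12, i.e. 2^12 = 4096-byte blocks, so with nsze=0x140000 the namespace size works out as:

    # Worked example using the nvme0n1 id-ns values captured in this trace.
    nsze=$((0x140000))       # 1310720 logical blocks
    block=$((1 << 12))       # 4096 bytes per block (lbads:12, LBA format in use)
    echo $((nsze * block))   # 5368709120 bytes = 5 GiB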
00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.519 12:51:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:26.519 12:51:17 nvme_scc -- scripts/common.sh@15 -- # local i 00:09:26.519 12:51:17 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:09:26.519 12:51:17 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:26.519 12:51:17 nvme_scc -- scripts/common.sh@24 -- # return 0 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.519 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:26.520 12:51:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 
12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:26.520 12:51:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.520 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:26.521 12:51:17 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.521 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:26.522 12:51:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.522 12:51:17 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.522 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 
12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.523 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:26.524 12:51:18 nvme_scc -- scripts/common.sh@15 -- # local i 00:09:26.524 12:51:18 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:09:26.524 12:51:18 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:26.524 12:51:18 nvme_scc -- scripts/common.sh@24 -- # return 0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.524 12:51:18 
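The xtrace above shows how nvme/functions.sh builds one associative array per controller and namespace: the output of `nvme id-ctrl` (or `nvme id-ns`) is read line by line with `IFS=:`, split into a register name and a value, and stored with `eval` (e.g. nvme1n1[lbaf7], nvme2[vid]). A minimal standalone sketch of that pattern, assuming plain `name : value` lines from nvme-cli; the function name and the whitespace handling here are illustrative, not the project's exact helper:

  # Sketch only: mirrors the IFS=: / read / eval pattern visible in the trace.
  nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"                       # e.g. declares a global array "nvme2"
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}              # "lbaf  0 " -> "lbaf0"
          [[ -n $reg && -n $val ]] || continue  # skip blank or unparsable lines
          eval "$ref[$reg]=\"\${val# }\""       # e.g. nvme2[vid]=0x1b36
      done < <(nvme id-ctrl "$dev")             # same nvme-cli call as in the trace
  }
  # Usage (assumes nvme-cli is installed and the device exists):
  #   nvme_get_sketch nvme2 /dev/nvme2; echo "${nvme2[sn]}"
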
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:26.524 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.525 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:26.526 12:51:18 nvme_scc -- 
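With nvme2[oncs]=0x15d captured above, the property the nvme_scc test ultimately cares about, support for the Copy command, can be derived from ONCS bit 8 as defined in the NVMe base specification. A hypothetical helper (the name and check are an assumption for illustration, not taken from this repository):

  has_simple_copy() {
      local -n ctrl=$1              # nameref to an array filled above, e.g. nvme2
      local oncs=${ctrl[oncs]:-0}   # e.g. 0x15d
      (( oncs & 0x100 ))            # ONCS bit 8: Copy command supported
  }
  #   has_simple_copy nvme2 && echo "nvme2 supports Simple Copy"
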
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 
12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.526 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.527 12:51:18 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.527 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:26.789 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:26.790 12:51:18 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.790 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:26.791 12:51:18 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.791 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:09:26.792 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
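(The xtrace above shows nvme/functions.sh enumerating each namespace under /sys/class/nvme/nvme2, running /usr/local/src/nvme-cli/nvme id-ns against it, and caching every "field : value" pair of that output in a global associative array named after the device — nvme2n1, nvme2n2, nvme2n3. A minimal sketch of the parsing loop as it can be reconstructed from the trace markers @16-@23 follows; the field-name trimming and the exact way the nvme-cli output is fed into the loop are assumptions, not the script's literal code.)

nvme_get() {                                   # e.g. nvme_get nvme2n3 id-ns /dev/nvme2n3
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                        # trace @20: local -gA 'nvme2n3=()'
    while IFS=: read -r reg val; do            # trace @21: split each output line at the first ':'
        reg=${reg//[[:space:]]/}               # assumed: strip padding around the field name
        [[ -n $val ]] || continue              # trace @22: skip lines that carry no value
        eval "${ref}[$reg]=\"${val# }\""       # trace @23: e.g. nvme2n3[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")    # trace @16: id-ns / id-ctrl output
}

(Per the surrounding trace, the caller then records the result in the harness bookkeeping arrays — _ctrl_ns at @58 and ctrls/nvmes/bdfs/ordered_ctrls at @60-@63 — before moving on to the next controller found under /sys/class/nvme.)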
00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:26.793 12:51:18 nvme_scc -- scripts/common.sh@15 -- # local i 00:09:26.793 12:51:18 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:09:26.793 12:51:18 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:26.793 12:51:18 nvme_scc -- scripts/common.sh@24 -- # return 0 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.793 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:26.794 12:51:18 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:26.794 12:51:18 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:26.794 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:26.795 12:51:18 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.795 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:26.796 12:51:18 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:26.796 
12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
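The trace above shows the pattern functions.sh uses to capture a controller's identify data: each line of `nvme id-ctrl` output is split on ':' into a register name and a value and stored in a bash associative array (here nvme3). A minimal standalone sketch of that pattern, assuming nvme-cli is installed and /dev/nvme0 exists; this is illustrative only, not the harness's own nvme_get helper:

  #!/usr/bin/env bash
  # Read "name : value" pairs from id-ctrl output into an associative array,
  # mirroring the IFS=: / read -r reg val loop traced above.
  declare -A ctrl_regs
  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue       # skip lines without a value
      reg=${reg//[[:space:]]/}                   # strip padding from the name
      val="${val#"${val%%[![:space:]]*}"}"       # strip leading spaces from the value
      ctrl_regs[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  printf 'oncs=%s sqes=%s cqes=%s\n' "${ctrl_regs[oncs]}" "${ctrl_regs[sqes]}" "${ctrl_regs[cqes]}"

The harness does the same per controller and then keys its ctrls/nvmes/bdfs arrays by device name, which is what the functions.sh@60-63 lines that follow record for nvme3.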
00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:26.796 12:51:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:09:26.796 12:51:18 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:09:26.797 12:51:18 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:09:26.797 12:51:18 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:09:26.797 12:51:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:26.797 12:51:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:26.797 12:51:18 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.931 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.931 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.931 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.931 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.931 12:51:19 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:27.931 12:51:19 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:27.931 12:51:19 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:27.931 12:51:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:27.931 ************************************ 00:09:27.931 START TEST nvme_simple_copy 00:09:27.931 ************************************ 00:09:27.931 12:51:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:28.190 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:28.449 Initializing NVMe Controllers 00:09:28.449 Attaching to 0000:00:10.0 00:09:28.449 Controller supports SCC. Attached to 0000:00:10.0 00:09:28.449 Namespace ID: 1 size: 6GB 00:09:28.449 Initialization complete. 00:09:28.449 00:09:28.449 Controller QEMU NVMe Ctrl (12340 ) 00:09:28.449 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:28.449 Namespace Block Size:4096 00:09:28.449 Writing LBAs 0 to 63 with Random Data 00:09:28.449 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:28.449 LBAs matching Written Data: 64 00:09:28.449 00:09:28.449 real 0m0.276s 00:09:28.449 user 0m0.098s 00:09:28.449 sys 0m0.075s 00:09:28.449 12:51:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:28.449 12:51:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:28.449 ************************************ 00:09:28.449 END TEST nvme_simple_copy 00:09:28.449 ************************************ 00:09:28.449 00:09:28.449 real 0m8.009s 00:09:28.449 user 0m1.290s 00:09:28.449 sys 0m1.696s 00:09:28.449 12:51:19 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:28.449 12:51:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:28.449 ************************************ 00:09:28.449 END TEST nvme_scc 00:09:28.449 ************************************ 00:09:28.449 12:51:19 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:09:28.449 12:51:19 -- spdk/autotest.sh@235 -- # [[ 0 -eq 1 ]] 00:09:28.449 12:51:19 -- spdk/autotest.sh@238 -- # [[ '' -eq 1 ]] 00:09:28.449 12:51:19 -- spdk/autotest.sh@241 -- # [[ 1 -eq 1 ]] 00:09:28.449 12:51:19 -- spdk/autotest.sh@242 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:28.449 12:51:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:28.449 12:51:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:28.449 12:51:19 -- common/autotest_common.sh@10 -- # set +x 00:09:28.449 ************************************ 00:09:28.449 START TEST nvme_fdp 00:09:28.449 ************************************ 00:09:28.449 12:51:19 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh 00:09:28.449 * Looking for test storage... 
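Before the simple_copy run above, the trace shows get_ctrls_with_feature selecting nvme1 by testing bit 8 of each controller's ONCS value (the Simple Copy command support bit), i.e. (( oncs & 1 << 8 )). A standalone sketch of that check, assuming nvme-cli is available and using /dev/nvme1 as an example device node:

  #!/usr/bin/env bash
  # Check the Identify Controller ONCS field for Simple Copy support (bit 8),
  # the same test ctrl_has_scc performs in the trace above.
  dev=/dev/nvme1
  # awk keeps only the hex value from the "oncs : 0x..." line.
  oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
  if (( oncs & 1 << 8 )); then
      echo "$dev advertises Simple Copy (oncs=$oncs)"
  else
      echo "$dev has no Simple Copy support (oncs=$oncs)"
  fi

In this run ONCS reads back 0x15d for every controller, so bit 8 is set and nvme1 (0000:00:10.0) is the first match, which is the device the simple_copy app above attached to.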
00:09:28.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.449 12:51:19 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.449 12:51:19 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.450 12:51:19 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.450 12:51:19 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.450 12:51:19 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.450 12:51:19 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.450 12:51:19 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.450 12:51:19 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.450 12:51:19 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:28.450 12:51:19 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:28.450 12:51:19 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:28.450 12:51:19 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.450 12:51:19 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.017 Waiting for block devices as requested 00:09:29.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.275 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.275 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.275 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.650 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:34.650 12:51:25 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:34.650 12:51:25 nvme_fdp -- scripts/common.sh@15 -- # local i 00:09:34.650 12:51:25 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:09:34.650 12:51:25 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:34.650 12:51:25 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 
12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:34.650 12:51:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.650 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:34.651 12:51:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.651 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:34.652 12:51:25 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.652 12:51:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:34.652 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.653 12:51:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.653 
12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:34.653 
12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.653 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:34.654 12:51:26 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
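The trace records above and below show how nvme/functions.sh builds its controller and namespace maps: for each /sys/class/nvme device, nvme_get runs the nvme-cli id-ctrl or id-ns command, splits every output line on ':' into a register name and value, and evals the pair into a global associative array (nvme0, nvme0n1, nvme1, ...), which later lines register in the ctrls/nvmes/bdfs maps. Below is a minimal sketch of that parsing loop, reconstructed from the functions.sh@16-23 references in the trace; it is simplified (a nameref instead of the eval seen in the log, plus explicit whitespace trimming) and assumes bash 4.3+ and the nvme-cli binary path shown in the log.
#!/usr/bin/env bash
# Sketch only: mirrors the reg/val loop seen in the trace, not the exact helper.
nvme_get_sketch() {
  local ref=$1 subcmd=$2 dev=$3 reg val
  declare -gA "$ref"          # e.g. nvme0n1 becomes a global associative array
  local -n _out=$ref          # nameref used here instead of the eval in the trace
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue                 # skip headers and blank lines
    reg=${reg//[[:space:]]/}                  # 'nsze   ' -> 'nsze'
    val=${val#"${val%%[![:space:]]*}"}        # drop leading whitespace
    _out[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
}
# hypothetical usage, matching the call seen in the trace:
# nvme_get_sketch nvme0n1 id-ns /dev/nvme0n1
# echo "${nvme0n1[nsze]}"     # -> 0x140000 for the namespace captured above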
00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:34.654 12:51:26 nvme_fdp -- scripts/common.sh@15 -- # local i 00:09:34.654 12:51:26 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:09:34.654 12:51:26 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:34.654 12:51:26 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:34.654 12:51:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:34.654 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 
12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:34.655 12:51:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:34.655 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:34.656 12:51:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.656 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.657 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 
12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.658 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:34.659 12:51:26 nvme_fdp -- scripts/common.sh@15 -- # local i 00:09:34.659 12:51:26 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:09:34.659 12:51:26 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:34.659 12:51:26 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.659 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:34.660 12:51:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:34.660 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:34.661 12:51:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:34.661 12:51:26 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 
12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:34.662 12:51:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:34.663 12:51:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:34.663 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.664 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.925 12:51:26 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.925 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.926 12:51:26 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.926 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:34.927 12:51:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.927 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:34.928 12:51:26 nvme_fdp -- scripts/common.sh@15 -- # local i 00:09:34.928 12:51:26 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:09:34.928 12:51:26 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:34.928 12:51:26 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:34.928 12:51:26 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.928 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.929 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.930 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 
12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:34.931 12:51:26 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:09:34.931 12:51:26 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:34.931 12:51:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:09:34.932 12:51:26 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:09:34.932 12:51:26 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:34.932 12:51:26 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:34.932 12:51:26 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:35.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.064 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.064 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.064 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.064 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.322 12:51:27 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:36.322 12:51:27 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:36.322 12:51:27 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.322 12:51:27 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:36.322 ************************************ 00:09:36.322 START TEST nvme_flexible_data_placement 00:09:36.322 ************************************ 00:09:36.322 12:51:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:36.322 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:36.582 Initializing NVMe Controllers 00:09:36.582 Attaching to 0000:00:13.0 00:09:36.582 Controller supports FDP 
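The xtrace above is nvme/functions.sh filling a per-controller associative array with every "register : value" pair from the identify-controller dump, then picking the FDP-capable controller by testing CTRATT bit 19 (0x88010 has it set; the other controllers report 0x8000 and are skipped). A minimal sketch of that pattern, assuming the dump comes from nvme-cli's id-ctrl output (names here are illustrative, not the exact functions.sh helpers):

    declare -A nvme3                                     # one array per controller

    while IFS=: read -r reg val; do
        reg="${reg//[[:space:]]/}"                       # strip padding around the name
        val="${val#"${val%%[![:space:]]*}"}"             # trim leading whitespace
        [[ -n $reg && -n $val ]] && nvme3[$reg]=$val     # e.g. nvme3[ctratt]=0x88010
    done < <(nvme id-ctrl /dev/nvme3)                    # assumed source of the reg/val lines

    ctratt=${nvme3[ctratt]:-0}
    if (( ctratt & 1 << 19 )); then                      # CTRATT bit 19 = FDP supported
        echo "nvme3 supports FDP"                        # true for 0x88010, false for 0x8000
    fi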
Attached to 0000:00:13.0 00:09:36.582 Namespace ID: 1 Endurance Group ID: 1 00:09:36.582 Initialization complete. 00:09:36.582 00:09:36.582 ================================== 00:09:36.582 == FDP tests for Namespace: #01 == 00:09:36.582 ================================== 00:09:36.582 00:09:36.582 Get Feature: FDP: 00:09:36.583 ================= 00:09:36.583 Enabled: Yes 00:09:36.583 FDP configuration Index: 0 00:09:36.583 00:09:36.583 FDP configurations log page 00:09:36.583 =========================== 00:09:36.583 Number of FDP configurations: 1 00:09:36.583 Version: 0 00:09:36.583 Size: 112 00:09:36.583 FDP Configuration Descriptor: 0 00:09:36.583 Descriptor Size: 96 00:09:36.583 Reclaim Group Identifier format: 2 00:09:36.583 FDP Volatile Write Cache: Not Present 00:09:36.583 FDP Configuration: Valid 00:09:36.583 Vendor Specific Size: 0 00:09:36.583 Number of Reclaim Groups: 2 00:09:36.583 Number of Recalim Unit Handles: 8 00:09:36.583 Max Placement Identifiers: 128 00:09:36.583 Number of Namespaces Suppprted: 256 00:09:36.583 Reclaim unit Nominal Size: 6000000 bytes 00:09:36.583 Estimated Reclaim Unit Time Limit: Not Reported 00:09:36.583 RUH Desc #000: RUH Type: Initially Isolated 00:09:36.583 RUH Desc #001: RUH Type: Initially Isolated 00:09:36.583 RUH Desc #002: RUH Type: Initially Isolated 00:09:36.583 RUH Desc #003: RUH Type: Initially Isolated 00:09:36.583 RUH Desc #004: RUH Type: Initially Isolated 00:09:36.583 RUH Desc #005: RUH Type: Initially Isolated 00:09:36.583 RUH Desc #006: RUH Type: Initially Isolated 00:09:36.583 RUH Desc #007: RUH Type: Initially Isolated 00:09:36.583 00:09:36.583 FDP reclaim unit handle usage log page 00:09:36.583 ====================================== 00:09:36.583 Number of Reclaim Unit Handles: 8 00:09:36.583 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:36.583 RUH Usage Desc #001: RUH Attributes: Unused 00:09:36.583 RUH Usage Desc #002: RUH Attributes: Unused 00:09:36.583 RUH Usage Desc #003: RUH Attributes: Unused 00:09:36.583 RUH Usage Desc #004: RUH Attributes: Unused 00:09:36.583 RUH Usage Desc #005: RUH Attributes: Unused 00:09:36.583 RUH Usage Desc #006: RUH Attributes: Unused 00:09:36.583 RUH Usage Desc #007: RUH Attributes: Unused 00:09:36.583 00:09:36.583 FDP statistics log page 00:09:36.583 ======================= 00:09:36.583 Host bytes with metadata written: 1712574464 00:09:36.583 Media bytes with metadata written: 1713471488 00:09:36.583 Media bytes erased: 0 00:09:36.583 00:09:36.583 FDP Reclaim unit handle status 00:09:36.583 ============================== 00:09:36.583 Number of RUHS descriptors: 2 00:09:36.583 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005ec3 00:09:36.583 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:36.583 00:09:36.583 FDP write on placement id: 0 success 00:09:36.583 00:09:36.583 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:36.583 00:09:36.583 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:36.583 00:09:36.583 Get Feature: FDP Events for Placement handle: #0 00:09:36.583 ======================== 00:09:36.583 Number of FDP Events: 6 00:09:36.583 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:36.583 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:36.583 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:36.583 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:36.583 FDP Event: #4 Type: Media Reallocated Enabled: No 
00:09:36.583 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:36.583 00:09:36.583 FDP events log page 00:09:36.583 =================== 00:09:36.583 Number of FDP events: 1 00:09:36.583 FDP Event #0: 00:09:36.583 Event Type: RU Not Written to Capacity 00:09:36.583 Placement Identifier: Valid 00:09:36.583 NSID: Valid 00:09:36.583 Location: Valid 00:09:36.583 Placement Identifier: 0 00:09:36.583 Event Timestamp: 4 00:09:36.583 Namespace Identifier: 1 00:09:36.583 Reclaim Group Identifier: 0 00:09:36.583 Reclaim Unit Handle Identifier: 0 00:09:36.583 00:09:36.583 FDP test passed 00:09:36.583 00:09:36.583 real 0m0.238s 00:09:36.583 user 0m0.071s 00:09:36.583 sys 0m0.065s 00:09:36.583 12:51:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.583 12:51:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:36.583 ************************************ 00:09:36.583 END TEST nvme_flexible_data_placement 00:09:36.583 ************************************ 00:09:36.583 00:09:36.583 real 0m8.095s 00:09:36.583 user 0m1.336s 00:09:36.583 sys 0m1.727s 00:09:36.601 12:51:27 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.601 ************************************ 00:09:36.601 END TEST nvme_fdp 00:09:36.601 ************************************ 00:09:36.601 12:51:27 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:36.601 12:51:28 -- spdk/autotest.sh@245 -- # [[ '' -eq 1 ]] 00:09:36.601 12:51:28 -- spdk/autotest.sh@249 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:36.601 12:51:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.601 12:51:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.601 12:51:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.601 ************************************ 00:09:36.601 START TEST nvme_rpc 00:09:36.601 ************************************ 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:36.601 * Looking for test storage... 
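The START TEST / END TEST banners and the real/user/sys timings around nvme_flexible_data_placement and nvme_fdp come from the run_test wrapper in the common autotest helpers. A rough, illustrative reconstruction of what such a wrapper does (not the exact autotest_common.sh code):

    run_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                        # emits the real/user/sys lines
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # usage, as seen in this log:
    # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh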
00:09:36.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.601 12:51:28 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.601 12:51:28 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:36.601 12:51:28 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:09:36.861 12:51:28 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:36.861 12:51:28 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=76980 00:09:36.861 12:51:28 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:36.861 12:51:28 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:36.861 12:51:28 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 76980 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 76980 ']' 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:36.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:36.861 12:51:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.861 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:36.861 [2024-08-11 12:51:28.321555] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
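The bdf discovery traced just above reduces to one pipeline: scripts/gen_nvme.sh prints a JSON bdev configuration for every local controller, jq pulls out each PCI address (traddr), and the test takes the first one. A condensed sketch of that pattern, using the paths from this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"                                    # 0000:00:10.0 on this VM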
00:09:36.861 [2024-08-11 12:51:28.321796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76980 ] 00:09:37.121 [2024-08-11 12:51:28.473324] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.121 [2024-08-11 12:51:28.520336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.121 [2024-08-11 12:51:28.520398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.058 12:51:29 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.058 12:51:29 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:09:38.058 12:51:29 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:38.058 Nvme0n1 00:09:38.058 12:51:29 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:38.058 12:51:29 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:38.317 request: 00:09:38.317 { 00:09:38.317 "bdev_name": "Nvme0n1", 00:09:38.317 "filename": "non_existing_file", 00:09:38.317 "method": "bdev_nvme_apply_firmware", 00:09:38.317 "req_id": 1 00:09:38.317 } 00:09:38.317 Got JSON-RPC error response 00:09:38.317 response: 00:09:38.317 { 00:09:38.317 "code": -32603, 00:09:38.317 "message": "open file failed." 00:09:38.317 } 00:09:38.317 12:51:29 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:38.317 12:51:29 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:38.317 12:51:29 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:38.577 12:51:30 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:38.577 12:51:30 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 76980 00:09:38.577 12:51:30 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 76980 ']' 00:09:38.577 12:51:30 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 76980 00:09:38.577 12:51:30 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:09:38.577 12:51:30 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:38.577 12:51:30 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76980 00:09:38.836 12:51:30 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:38.836 killing process with pid 76980 00:09:38.836 12:51:30 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:38.836 12:51:30 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76980' 00:09:38.836 12:51:30 nvme_rpc -- common/autotest_common.sh@965 -- # kill 76980 00:09:38.836 12:51:30 nvme_rpc -- common/autotest_common.sh@970 -- # wait 76980 00:09:39.096 00:09:39.096 real 0m2.445s 00:09:39.096 user 0m5.069s 00:09:39.096 sys 0m0.539s 00:09:39.096 12:51:30 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:39.096 12:51:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 ************************************ 00:09:39.096 END TEST nvme_rpc 00:09:39.096 ************************************ 00:09:39.096 12:51:30 -- spdk/autotest.sh@250 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:39.096 12:51:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 
1 ']' 00:09:39.096 12:51:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:39.096 12:51:30 -- common/autotest_common.sh@10 -- # set +x 00:09:39.096 ************************************ 00:09:39.096 START TEST nvme_rpc_timeouts 00:09:39.096 ************************************ 00:09:39.096 12:51:30 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:39.096 * Looking for test storage... 00:09:39.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.096 12:51:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.096 12:51:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77034 00:09:39.096 12:51:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77034 00:09:39.096 12:51:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77058 00:09:39.096 12:51:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:39.096 12:51:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:39.096 12:51:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77058 00:09:39.096 12:51:30 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 77058 ']' 00:09:39.096 12:51:30 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.096 12:51:30 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:39.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.096 12:51:30 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.096 12:51:30 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:39.096 12:51:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:39.355 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:39.355 [2024-08-11 12:51:30.722561] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
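For context, the nvme_rpc run that finished above (pid 76980) is a deliberate error-path check: attach a controller over rpc.py, ask it to apply firmware from a file that does not exist, and require the JSON-RPC error (-32603, "open file failed.") instead of success. Condensed into a sketch built from the same commands the log shows:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    if $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "expected firmware apply to fail" >&2       # reaching here is a test failure
        exit 1
    fi
    $rpc_py bdev_nvme_detach_controller Nvme0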
00:09:39.355 [2024-08-11 12:51:30.722704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77058 ] 00:09:39.355 [2024-08-11 12:51:30.863828] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.355 [2024-08-11 12:51:30.901787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.355 [2024-08-11 12:51:30.901838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.307 12:51:31 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:40.307 12:51:31 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:09:40.307 Checking default timeout settings: 00:09:40.307 12:51:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:40.307 12:51:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:40.566 Making settings changes with rpc: 00:09:40.567 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:40.567 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:40.826 Check default vs. modified settings: 00:09:40.826 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:40.826 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77034 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77034 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:41.394 Setting action_on_timeout is changed as expected. 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77034 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77034 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:41.394 Setting timeout_us is changed as expected. 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77034 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77034 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:41.394 Setting timeout_admin_us is changed as expected. 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
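The three "changed as expected" messages come from the same check pattern each time: save the JSON configuration before and after bdev_nvme_set_options, extract one setting from each dump, and require that its value actually moved. A simplified sketch (the grep pattern is tightened slightly compared to the script's grep/awk/sed chain):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py save_config > /tmp/settings_default_77034
    $rpc_py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 \
            --action-on-timeout=abort
    $rpc_py save_config > /tmp/settings_modified_77034

    get_setting() {                                      # pull one value out of a config dump
        grep "\"$1\":" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    before=$(get_setting timeout_us /tmp/settings_default_77034)     # 0
    after=$(get_setting timeout_us /tmp/settings_modified_77034)     # 12000000
    [[ $before != "$after" ]] && echo "Setting timeout_us is changed as expected."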
00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77034 /tmp/settings_modified_77034 00:09:41.394 12:51:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77058 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 77058 ']' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 77058 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77058 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77058' 00:09:41.394 killing process with pid 77058 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 77058 00:09:41.394 12:51:32 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 77058 00:09:41.654 RPC TIMEOUT SETTING TEST PASSED. 00:09:41.654 12:51:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:41.654 00:09:41.654 real 0m2.570s 00:09:41.654 user 0m5.524s 00:09:41.654 sys 0m0.495s 00:09:41.654 12:51:33 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.654 ************************************ 00:09:41.654 12:51:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:41.654 END TEST nvme_rpc_timeouts 00:09:41.654 ************************************ 00:09:41.654 12:51:33 -- spdk/autotest.sh@252 -- # uname -s 00:09:41.654 12:51:33 -- spdk/autotest.sh@252 -- # '[' Linux = Linux ']' 00:09:41.654 12:51:33 -- spdk/autotest.sh@253 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:41.654 12:51:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:41.654 12:51:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.654 12:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:41.654 ************************************ 00:09:41.654 START TEST sw_hotplug 00:09:41.654 ************************************ 00:09:41.654 12:51:33 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:41.913 * Looking for test storage... 
00:09:41.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.913 12:51:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:42.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.430 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.430 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.430 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.430 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.430 12:51:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:42.430 12:51:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:42.430 12:51:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:42.430 12:51:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@230 -- # local class 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@15 -- # local i 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:09:42.430 12:51:33 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@15 -- # local i 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@15 -- # local i 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:09:42.430 12:51:33 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.430 12:51:33 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:42.430 12:51:33 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:42.430 12:51:33 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:42.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.948 Waiting for block devices as requested 00:09:42.948 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.948 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.207 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.207 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.481 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:48.481 12:51:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:48.481 12:51:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:48.739 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:48.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.740 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:48.998 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:49.257 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.257 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.515 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:49.515 12:51:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:49.515 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:49.515 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:49.515 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=77901 00:09:49.515 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:49.515 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:49.515 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:49.516 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:49.516 12:51:40 sw_hotplug -- common/autotest_common.sh@703 -- # local cmd_es=0 00:09:49.516 12:51:40 sw_hotplug -- common/autotest_common.sh@705 -- # [[ -t 0 ]] 00:09:49.516 12:51:40 sw_hotplug -- common/autotest_common.sh@705 -- # exec 00:09:49.516 12:51:40 sw_hotplug -- common/autotest_common.sh@707 -- # local time=0 TIMEFORMAT=%2R 00:09:49.516 12:51:40 sw_hotplug -- common/autotest_common.sh@713 -- # remove_attach_helper 3 6 false 00:09:49.516 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:49.516 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:49.516 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:49.516 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:49.516 12:51:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:49.516 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:49.774 Initializing NVMe Controllers 00:09:49.774 Attaching to 0000:00:10.0 00:09:49.774 Attaching to 0000:00:11.0 00:09:49.774 Attached to 0000:00:10.0 00:09:49.774 Attached to 0000:00:11.0 00:09:49.774 Initialization complete. Starting I/O... 
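The scripts/common.sh trace above is the nvme_in_userspace enumeration: it encodes the NVMe class code as class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe), lists devices with lspci -mm -n -D, filters on that class code, and then truncates the result to nvme_count=2 controllers for this hotplug run. A condensed sketch of the same pipeline, reconstructed from the trace (the pci_can_use allow/deny checks are omitted):

```bash
# Sketch: enumerate NVMe controllers (class 01, subclass 08, prog-if 02)
# the way the traced helpers do, then keep only the first two for the test.
nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
           | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
nvme_count=2
nvmes=("${nvmes[@]::nvme_count}")
printf '%s\n' "${nvmes[@]}"     # e.g. 0000:00:10.0 and 0000:00:11.0 in this run
```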
00:09:49.774 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:49.774 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:49.774 00:09:50.711 QEMU NVMe Ctrl (12340 ): 1224 I/Os completed (+1224) 00:09:50.711 QEMU NVMe Ctrl (12341 ): 1326 I/Os completed (+1326) 00:09:50.711 00:09:51.677 QEMU NVMe Ctrl (12340 ): 2784 I/Os completed (+1560) 00:09:51.677 QEMU NVMe Ctrl (12341 ): 3048 I/Os completed (+1722) 00:09:51.677 00:09:52.613 QEMU NVMe Ctrl (12340 ): 4471 I/Os completed (+1687) 00:09:52.613 QEMU NVMe Ctrl (12341 ): 4983 I/Os completed (+1935) 00:09:52.613 00:09:53.991 QEMU NVMe Ctrl (12340 ): 6487 I/Os completed (+2016) 00:09:53.991 QEMU NVMe Ctrl (12341 ): 7079 I/Os completed (+2096) 00:09:53.991 00:09:54.929 QEMU NVMe Ctrl (12340 ): 8263 I/Os completed (+1776) 00:09:54.929 QEMU NVMe Ctrl (12341 ): 9081 I/Os completed (+2002) 00:09:54.929 00:09:55.498 12:51:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:55.498 12:51:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.498 12:51:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.498 [2024-08-11 12:51:46.995395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:55.498 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:55.498 [2024-08-11 12:51:46.997159] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:46.997215] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:46.997238] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:46.997259] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:55.498 [2024-08-11 12:51:46.999414] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:46.999473] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:46.999495] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:46.999516] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.498 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.498 [2024-08-11 12:51:47.026821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:55.498 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:55.498 [2024-08-11 12:51:47.028384] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:47.028445] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:47.028471] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:47.028491] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:55.498 [2024-08-11 12:51:47.030129] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:47.030168] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:47.030194] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 [2024-08-11 12:51:47.030213] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.498 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:55.498 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:55.757 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.757 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.758 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:55.758 00:09:55.758 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:55.758 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.758 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.758 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.758 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:55.758 Attaching to 0000:00:10.0 00:09:55.758 Attached to 0000:00:10.0 00:09:55.758 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:56.017 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:56.017 12:51:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:56.017 Attaching to 0000:00:11.0 00:09:56.017 Attached to 0000:00:11.0 00:09:56.953 QEMU NVMe Ctrl (12340 ): 2017 I/Os completed (+2017) 00:09:56.953 QEMU NVMe Ctrl (12341 ): 1814 I/Os completed (+1814) 00:09:56.953 00:09:57.889 QEMU NVMe Ctrl (12340 ): 4201 I/Os completed (+2184) 00:09:57.889 QEMU NVMe Ctrl (12341 ): 4003 I/Os completed (+2189) 00:09:57.889 00:09:58.824 QEMU NVMe Ctrl (12340 ): 6337 I/Os completed (+2136) 00:09:58.824 QEMU NVMe Ctrl (12341 ): 6176 I/Os completed (+2173) 00:09:58.824 00:09:59.759 QEMU NVMe Ctrl (12340 ): 8525 I/Os completed (+2188) 00:09:59.759 QEMU NVMe Ctrl (12341 ): 8368 I/Os completed (+2192) 00:09:59.759 00:10:00.694 QEMU NVMe Ctrl (12340 ): 10726 I/Os completed (+2201) 00:10:00.694 QEMU NVMe Ctrl (12341 ): 10572 I/Os completed (+2204) 00:10:00.694 00:10:01.629 QEMU NVMe Ctrl (12340 ): 12914 I/Os completed (+2188) 00:10:01.629 QEMU NVMe Ctrl (12341 ): 12763 I/Os completed (+2191) 00:10:01.629 00:10:02.600 QEMU NVMe Ctrl (12340 ): 15118 I/Os completed (+2204) 00:10:02.600 QEMU NVMe Ctrl (12341 ): 14974 I/Os completed (+2211) 00:10:02.600 00:10:03.977 QEMU NVMe Ctrl (12340 ): 17266 I/Os completed (+2148) 00:10:03.977 QEMU NVMe Ctrl (12341 ): 17137 I/Os completed (+2163) 00:10:03.977 
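Each hotplug event in this first, non-bdev pass follows the pattern traced at sw_hotplug.sh@40 and @56-62: write 1 to the controller's remove node, wait hotplug_wait seconds, then rescan and rebind the device to uio_pci_generic. The xtrace only shows the values being echoed, not the sysfs targets, so the paths in the sketch below are assumptions:

```bash
# Sketch of one surprise-removal/re-attach cycle; the sysfs paths are assumptions,
# only the echoed values (1, uio_pci_generic, the BDF, '') come from the trace.
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"          # surprise-remove the controller
sleep 6                                              # hotplug_wait used by the test
echo 1 > /sys/bus/pci/rescan                         # make the device reappear
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe             # rebind to the userspace driver
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
```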
00:10:04.914 QEMU NVMe Ctrl (12340 ): 19444 I/Os completed (+2178) 00:10:04.914 QEMU NVMe Ctrl (12341 ): 19321 I/Os completed (+2184) 00:10:04.914 00:10:05.852 QEMU NVMe Ctrl (12340 ): 21660 I/Os completed (+2216) 00:10:05.852 QEMU NVMe Ctrl (12341 ): 21538 I/Os completed (+2217) 00:10:05.852 00:10:06.792 QEMU NVMe Ctrl (12340 ): 23644 I/Os completed (+1984) 00:10:06.792 QEMU NVMe Ctrl (12341 ): 23600 I/Os completed (+2062) 00:10:06.792 00:10:07.730 QEMU NVMe Ctrl (12340 ): 25720 I/Os completed (+2076) 00:10:07.730 QEMU NVMe Ctrl (12341 ): 25695 I/Os completed (+2095) 00:10:07.730 00:10:07.990 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:07.990 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:07.990 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.990 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.990 [2024-08-11 12:51:59.365744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:07.990 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:07.990 [2024-08-11 12:51:59.367776] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 [2024-08-11 12:51:59.367838] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 [2024-08-11 12:51:59.367882] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 [2024-08-11 12:51:59.367931] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:07.990 [2024-08-11 12:51:59.370418] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 [2024-08-11 12:51:59.370480] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 [2024-08-11 12:51:59.370507] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 [2024-08-11 12:51:59.370531] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.990 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.990 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.991 [2024-08-11 12:51:59.402424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:07.991 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:07.991 [2024-08-11 12:51:59.404331] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 [2024-08-11 12:51:59.404399] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 [2024-08-11 12:51:59.404429] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 [2024-08-11 12:51:59.404452] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:07.991 [2024-08-11 12:51:59.406408] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 [2024-08-11 12:51:59.406463] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 [2024-08-11 12:51:59.406494] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 [2024-08-11 12:51:59.406516] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.991 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:07.991 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:07.991 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:07.991 EAL: Scan for (pci) bus failed. 00:10:07.991 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:07.991 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:07.991 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:08.251 Attaching to 0000:00:10.0 00:10:08.251 Attached to 0000:00:10.0 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.251 12:51:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:08.251 Attaching to 0000:00:11.0 00:10:08.251 Attached to 0000:00:11.0 00:10:08.819 QEMU NVMe Ctrl (12340 ): 1200 I/Os completed (+1200) 00:10:08.819 QEMU NVMe Ctrl (12341 ): 993 I/Os completed (+993) 00:10:08.819 00:10:09.758 QEMU NVMe Ctrl (12340 ): 3176 I/Os completed (+1976) 00:10:09.758 QEMU NVMe Ctrl (12341 ): 3031 I/Os completed (+2038) 00:10:09.758 00:10:10.704 QEMU NVMe Ctrl (12340 ): 5164 I/Os completed (+1988) 00:10:10.704 QEMU NVMe Ctrl (12341 ): 5078 I/Os completed (+2047) 00:10:10.704 00:10:11.642 QEMU NVMe Ctrl (12340 ): 7276 I/Os completed (+2112) 00:10:11.642 QEMU NVMe Ctrl (12341 ): 7205 I/Os completed (+2127) 00:10:11.642 00:10:13.019 QEMU NVMe Ctrl (12340 ): 9384 I/Os completed (+2108) 00:10:13.019 QEMU NVMe Ctrl (12341 ): 9328 I/Os completed (+2123) 00:10:13.019 00:10:13.587 QEMU NVMe Ctrl (12340 ): 11417 I/Os completed (+2033) 00:10:13.587 QEMU NVMe Ctrl (12341 ): 11407 I/Os completed (+2079) 00:10:13.587 00:10:14.965 QEMU NVMe Ctrl (12340 ): 13549 I/Os completed (+2132) 00:10:14.965 QEMU NVMe Ctrl (12341 ): 13555 I/Os completed (+2148) 00:10:14.965 
00:10:15.902 QEMU NVMe Ctrl (12340 ): 15693 I/Os completed (+2144) 00:10:15.902 QEMU NVMe Ctrl (12341 ): 15719 I/Os completed (+2164) 00:10:15.902 00:10:16.839 QEMU NVMe Ctrl (12340 ): 17825 I/Os completed (+2132) 00:10:16.839 QEMU NVMe Ctrl (12341 ): 17872 I/Os completed (+2153) 00:10:16.839 00:10:17.775 QEMU NVMe Ctrl (12340 ): 19898 I/Os completed (+2073) 00:10:17.775 QEMU NVMe Ctrl (12341 ): 19991 I/Os completed (+2119) 00:10:17.775 00:10:18.711 QEMU NVMe Ctrl (12340 ): 22000 I/Os completed (+2102) 00:10:18.711 QEMU NVMe Ctrl (12341 ): 22129 I/Os completed (+2138) 00:10:18.711 00:10:19.647 QEMU NVMe Ctrl (12340 ): 24089 I/Os completed (+2089) 00:10:19.647 QEMU NVMe Ctrl (12341 ): 24314 I/Os completed (+2185) 00:10:19.647 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.213 [2024-08-11 12:52:11.721104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:20.213 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:20.213 [2024-08-11 12:52:11.724831] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.724935] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.724962] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.724987] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:20.213 [2024-08-11 12:52:11.726965] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.727016] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.727040] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.727060] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.213 [2024-08-11 12:52:11.744006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:20.213 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:20.213 [2024-08-11 12:52:11.745535] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.745613] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.745637] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.745656] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:20.213 [2024-08-11 12:52:11.747288] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.747343] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.747397] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 [2024-08-11 12:52:11.747414] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:20.213 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:20.213 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:20.213 EAL: Scan for (pci) bus failed. 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.472 12:52:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:20.472 Attaching to 0000:00:10.0 00:10:20.472 Attached to 0000:00:10.0 00:10:20.472 12:52:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:20.472 12:52:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.472 12:52:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:20.472 Attaching to 0000:00:11.0 00:10:20.472 Attached to 0000:00:11.0 00:10:20.472 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:20.472 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:20.472 [2024-08-11 12:52:12.046413] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:32.677 12:52:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:32.677 12:52:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:32.677 12:52:24 sw_hotplug -- common/autotest_common.sh@713 -- # time=43.05 00:10:32.677 12:52:24 sw_hotplug -- common/autotest_common.sh@714 -- # echo 43.05 00:10:32.677 12:52:24 sw_hotplug -- common/autotest_common.sh@716 -- # return 0 00:10:32.677 12:52:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.05 00:10:32.677 12:52:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 2 00:10:32.677 remove_attach_helper took 43.05s to complete (handling 2 nvme drive(s)) 12:52:24 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 77901 00:10:39.237 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (77901) - No such process 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 77901 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78446 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78446 00:10:39.237 12:52:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:39.237 12:52:30 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 78446 ']' 00:10:39.237 12:52:30 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.237 12:52:30 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:39.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.237 12:52:30 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.237 12:52:30 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:39.237 12:52:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.237 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:10:39.237 [2024-08-11 12:52:30.165561] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:10:39.237 [2024-08-11 12:52:30.166491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78446 ] 00:10:39.237 [2024-08-11 12:52:30.313671] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.237 [2024-08-11 12:52:30.357599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@703 -- # local cmd_es=0 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@705 -- # [[ -t 0 ]] 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@705 -- # exec 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@707 -- # local time=0 TIMEFORMAT=%2R 00:10:39.497 12:52:31 sw_hotplug -- common/autotest_common.sh@713 -- # remove_attach_helper 3 6 true 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:39.497 12:52:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:46.061 12:52:37 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:46.061 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:46.061 12:52:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.061 [2024-08-11 12:52:37.174139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
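From this point the same three hotplug events are repeated with use_bdev=true: a standalone spdk_tgt (pid 78446) is running, hotplug monitoring is enabled over RPC, and removal is detected by polling which PCI addresses still back NVMe bdevs via the bdev_bdfs helper traced at sw_hotplug.sh@12-13. A hedged sketch of the equivalent manual RPC usage, with scripts/rpc.py standing in for the rpc_cmd wrapper used in the trace:

```bash
# Sketch of the bdev-based detection path; the rpc.py calls mirror the traced
# rpc_cmd invocations, and the loop mirrors sw_hotplug.sh@50-51.
./scripts/rpc.py bdev_nvme_set_hotplug -e        # enable the hotplug monitor

bdev_bdfs() {
    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
}

# After removing the controllers, wait until no NVMe bdev still reports a PCI address.
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done
```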
00:10:46.061 12:52:37 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:46.061 [2024-08-11 12:52:37.176454] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.061 [2024-08-11 12:52:37.176549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.061 [2024-08-11 12:52:37.176570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.061 [2024-08-11 12:52:37.176593] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.062 [2024-08-11 12:52:37.176612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.062 [2024-08-11 12:52:37.176642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.062 [2024-08-11 12:52:37.176654] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.062 [2024-08-11 12:52:37.176670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.062 [2024-08-11 12:52:37.176682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.062 [2024-08-11 12:52:37.176695] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.062 [2024-08-11 12:52:37.176706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.062 [2024-08-11 12:52:37.176720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.062 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:46.062 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:46.062 [2024-08-11 12:52:37.574166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:46.062 [2024-08-11 12:52:37.576402] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.062 [2024-08-11 12:52:37.576596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.062 [2024-08-11 12:52:37.576631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.062 [2024-08-11 12:52:37.576651] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.062 [2024-08-11 12:52:37.576666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.062 [2024-08-11 12:52:37.576678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.062 [2024-08-11 12:52:37.576704] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.062 [2024-08-11 12:52:37.576731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.062 [2024-08-11 12:52:37.576746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.062 [2024-08-11 12:52:37.576759] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.062 [2024-08-11 12:52:37.576774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.062 [2024-08-11 12:52:37.576786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.321 12:52:37 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:46.321 12:52:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.321 12:52:37 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.321 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:46.580 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:46.580 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.580 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.580 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.580 12:52:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:46.580 12:52:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:46.580 12:52:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.580 12:52:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.811 12:52:50 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:58.811 12:52:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.811 12:52:50 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.811 12:52:50 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:58.811 12:52:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.811 [2024-08-11 12:52:50.174362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:58.811 [2024-08-11 12:52:50.176712] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.811 [2024-08-11 12:52:50.176903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.811 [2024-08-11 12:52:50.177118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.811 [2024-08-11 12:52:50.177299] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.811 [2024-08-11 12:52:50.177401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.811 [2024-08-11 12:52:50.177524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.811 [2024-08-11 12:52:50.177680] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.811 [2024-08-11 12:52:50.177730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.811 [2024-08-11 12:52:50.177959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.811 [2024-08-11 12:52:50.178035] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.811 [2024-08-11 12:52:50.178173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.811 [2024-08-11 12:52:50.178244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.811 12:52:50 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:58.811 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:59.070 [2024-08-11 12:52:50.574360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:59.070 [2024-08-11 12:52:50.576944] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.070 [2024-08-11 12:52:50.577164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.070 [2024-08-11 12:52:50.577342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.070 [2024-08-11 12:52:50.577635] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.070 [2024-08-11 12:52:50.577693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.070 [2024-08-11 12:52:50.577857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.070 [2024-08-11 12:52:50.578011] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.070 [2024-08-11 12:52:50.578096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.070 [2024-08-11 12:52:50.578354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.070 [2024-08-11 12:52:50.578514] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.070 [2024-08-11 12:52:50.578566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.070 [2024-08-11 12:52:50.578764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.329 12:52:50 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:59.329 12:52:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.329 12:52:50 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:59.329 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:59.587 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:59.587 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.587 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:59.587 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:59.587 12:52:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:59.587 12:52:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:59.587 12:52:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.587 12:52:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.785 12:53:03 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:11.785 12:53:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.785 12:53:03 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.785 12:53:03 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:11.785 12:53:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.785 [2024-08-11 12:53:03.174507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:11.785 [2024-08-11 12:53:03.177048] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.785 [2024-08-11 12:53:03.177096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.785 [2024-08-11 12:53:03.177116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.785 [2024-08-11 12:53:03.177140] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.785 [2024-08-11 12:53:03.177154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.785 [2024-08-11 12:53:03.177169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.785 [2024-08-11 12:53:03.177181] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.785 [2024-08-11 12:53:03.177195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.785 [2024-08-11 12:53:03.177206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.785 [2024-08-11 12:53:03.177236] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.785 [2024-08-11 12:53:03.177263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.785 [2024-08-11 12:53:03.177276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.785 12:53:03 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:11.785 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.044 [2024-08-11 12:53:03.574515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:12.044 [2024-08-11 12:53:03.577085] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.044 [2024-08-11 12:53:03.577149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.044 [2024-08-11 12:53:03.577171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.044 [2024-08-11 12:53:03.577191] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.044 [2024-08-11 12:53:03.577205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.044 [2024-08-11 12:53:03.577217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.044 [2024-08-11 12:53:03.577235] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.044 [2024-08-11 12:53:03.577247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.044 [2024-08-11 12:53:03.577260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.044 [2024-08-11 12:53:03.577271] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.044 [2024-08-11 12:53:03.577284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.044 [2024-08-11 12:53:03.577309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.302 12:53:03 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:12.302 12:53:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.302 12:53:03 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.302 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.560 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.560 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.560 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.560 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.560 12:53:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:12.560 12:53:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.560 12:53:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.560 12:53:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@713 -- # time=45.03 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@714 -- # echo 45.03 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@716 -- # return 0 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.03 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.03 2 00:11:24.766 remove_attach_helper took 45.03s to complete (handling 2 nvme drive(s)) 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@703 -- # local cmd_es=0 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@705 -- # [[ -t 0 ]] 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@705 -- # exec 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@707 -- # local time=0 TIMEFORMAT=%2R 00:11:24.766 12:53:16 sw_hotplug -- common/autotest_common.sh@713 -- # remove_attach_helper 3 6 true 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:24.766 12:53:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:24.766 12:53:16 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.448 12:53:22 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:31.448 12:53:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.448 [2024-08-11 12:53:22.237670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:31.448 [2024-08-11 12:53:22.239461] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.239647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 12:53:22 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:31.448 [2024-08-11 12:53:22.239824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 [2024-08-11 12:53:22.239903] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.239925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 [2024-08-11 12:53:22.239943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 [2024-08-11 12:53:22.239958] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.239973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 [2024-08-11 12:53:22.239986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 [2024-08-11 12:53:22.240004] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.240017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 [2024-08-11 12:53:22.240032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.448 [2024-08-11 12:53:22.637691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:31.448 [2024-08-11 12:53:22.639587] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.639645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 [2024-08-11 12:53:22.639667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 [2024-08-11 12:53:22.639686] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.639700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 [2024-08-11 12:53:22.639712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 [2024-08-11 12:53:22.639726] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.639737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 [2024-08-11 12:53:22.639750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 [2024-08-11 12:53:22.639761] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.448 [2024-08-11 12:53:22.639774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.448 [2024-08-11 12:53:22.639785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.448 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.449 12:53:22 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:31.449 12:53:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.449 12:53:22 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.449 12:53:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:31.708 12:53:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.708 12:53:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.708 12:53:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.918 12:53:35 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:43.918 12:53:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.918 12:53:35 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.918 12:53:35 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:43.918 12:53:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.918 12:53:35 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:43.918 [2024-08-11 12:53:35.237842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
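[editor's note] The bare echo calls at sw_hotplug.sh@39-40 and @56-62 above are writes into PCI sysfs attributes; xtrace never prints redirection targets, so the paths in this sketch are assumptions based on standard Linux PCI sysfs, not something shown in the log.

# Assumed shape of one hotplug iteration (sysfs paths are assumptions):
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"        # @40: surprise-remove the device
done
# ... wait for the bdevs to disappear, then bring the devices back:
echo 1 > /sys/bus/pci/rescan                           # @56: rescan the bus
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
    echo "$dev" > /sys/bus/pci/drivers_probe            # @60/@61: re-probe and bind (assumed target)
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: clear the override
done
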
00:11:43.918 [2024-08-11 12:53:35.239639] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.918 [2024-08-11 12:53:35.239706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.918 [2024-08-11 12:53:35.239727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.918 [2024-08-11 12:53:35.239748] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.918 [2024-08-11 12:53:35.239761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.918 [2024-08-11 12:53:35.239775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.918 [2024-08-11 12:53:35.239786] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.918 [2024-08-11 12:53:35.239800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.918 [2024-08-11 12:53:35.239811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.918 [2024-08-11 12:53:35.239824] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.918 [2024-08-11 12:53:35.239910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.918 [2024-08-11 12:53:35.239937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:43.918 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.177 [2024-08-11 12:53:35.637819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:44.177 [2024-08-11 12:53:35.639478] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.177 [2024-08-11 12:53:35.639520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.177 [2024-08-11 12:53:35.639545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.177 [2024-08-11 12:53:35.639562] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.177 [2024-08-11 12:53:35.639575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.177 [2024-08-11 12:53:35.639586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.177 [2024-08-11 12:53:35.639599] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.177 [2024-08-11 12:53:35.639610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.177 [2024-08-11 12:53:35.639622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.177 [2024-08-11 12:53:35.639632] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.177 [2024-08-11 12:53:35.639644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.177 [2024-08-11 12:53:35.639654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.177 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:44.177 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.177 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.177 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.177 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.177 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.177 12:53:35 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:44.177 12:53:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.177 12:53:35 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:44.435 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:44.435 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:44.435 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.435 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.435 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:44.435 12:53:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:44.435 12:53:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.435 12:53:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.435 12:53:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.435 12:53:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:44.693 12:53:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:44.693 12:53:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.693 12:53:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.913 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.913 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.913 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:56.913 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.913 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.913 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.913 12:53:48 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:56.913 12:53:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 12:53:48 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.914 12:53:48 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:56.914 12:53:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.914 [2024-08-11 12:53:48.237996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:56.914 [2024-08-11 12:53:48.239897] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.914 [2024-08-11 12:53:48.240066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.914 [2024-08-11 12:53:48.240320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.914 [2024-08-11 12:53:48.240474] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.914 [2024-08-11 12:53:48.240603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.914 [2024-08-11 12:53:48.240755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.914 [2024-08-11 12:53:48.240828] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.914 [2024-08-11 12:53:48.240982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.914 [2024-08-11 12:53:48.241133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.914 [2024-08-11 12:53:48.241387] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.914 [2024-08-11 12:53:48.241590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.914 [2024-08-11 12:53:48.241665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.914 12:53:48 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:56.914 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:57.172 [2024-08-11 12:53:48.638001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
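[editor's note] The repeated @50/@51 lines above amount to a small polling loop; a sketch of its assumed shape follows (the exact control flow in sw_hotplug.sh may differ, only the pieces shown in the trace are reused here).

# Assumed shape: poll until no NVMe bdev still reports a PCI address,
# printing the stragglers between 0.5 s sleeps.
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done
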
00:11:57.172 [2024-08-11 12:53:48.639749] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.173 [2024-08-11 12:53:48.639992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.173 [2024-08-11 12:53:48.640155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.173 [2024-08-11 12:53:48.640424] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.173 [2024-08-11 12:53:48.640481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.173 [2024-08-11 12:53:48.640618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.173 [2024-08-11 12:53:48.640685] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.173 [2024-08-11 12:53:48.640781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.173 [2024-08-11 12:53:48.640953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.173 [2024-08-11 12:53:48.641132] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.173 [2024-08-11 12:53:48.641222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.173 [2024-08-11 12:53:48.641451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.173 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:57.173 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.431 12:53:48 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:57.431 12:53:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.431 12:53:48 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.431 12:53:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:57.431 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:57.431 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.431 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.431 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.431 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:57.690 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:57.690 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.690 12:53:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@713 -- # time=45.02 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@714 -- # echo 45.02 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@716 -- # return 0 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.02 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.02 2 00:12:09.898 remove_attach_helper took 45.02s to complete (handling 2 nvme drive(s)) 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:09.898 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78446 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 78446 ']' 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 78446 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78446 00:12:09.898 killing process with pid 78446 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78446' 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@965 -- # kill 78446 00:12:09.898 12:54:01 sw_hotplug -- common/autotest_common.sh@970 -- # wait 78446 00:12:10.169 12:54:01 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:10.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.008 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:11.008 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:11.008 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.008 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.008 00:12:11.008 real 2m29.381s 00:12:11.008 user 1m48.074s 00:12:11.008 sys 0m20.840s 00:12:11.008 12:54:02 sw_hotplug -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.008 ************************************ 00:12:11.008 12:54:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.008 END TEST sw_hotplug 00:12:11.008 ************************************ 00:12:11.008 12:54:02 -- spdk/autotest.sh@256 -- # [[ 1 -eq 1 ]] 00:12:11.008 12:54:02 -- spdk/autotest.sh@257 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:11.008 12:54:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:11.008 12:54:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.008 12:54:02 -- common/autotest_common.sh@10 -- # set +x 00:12:11.267 ************************************ 00:12:11.267 START TEST nvme_xnvme 00:12:11.267 ************************************ 00:12:11.267 12:54:02 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:11.267 * Looking for test storage... 00:12:11.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:11.267 12:54:02 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.267 12:54:02 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.267 12:54:02 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.267 12:54:02 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.267 12:54:02 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.267 12:54:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.267 12:54:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.267 12:54:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:11.268 12:54:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.268 12:54:02 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:11.268 12:54:02 nvme_xnvme -- common/autotest_common.sh@1097 -- # 
'[' 2 -le 1 ']' 00:12:11.268 12:54:02 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.268 12:54:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.268 ************************************ 00:12:11.268 START TEST xnvme_to_malloc_dd_copy 00:12:11.268 ************************************ 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:11.268 12:54:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:11.268 { 00:12:11.268 "subsystems": [ 00:12:11.268 { 00:12:11.268 "subsystem": "bdev", 00:12:11.268 "config": [ 00:12:11.268 { 00:12:11.268 "params": { 00:12:11.268 "block_size": 512, 00:12:11.268 "num_blocks": 2097152, 00:12:11.268 "name": "malloc0" 00:12:11.268 }, 00:12:11.268 "method": "bdev_malloc_create" 00:12:11.268 }, 00:12:11.268 { 00:12:11.268 "params": 
{ 00:12:11.268 "io_mechanism": "libaio", 00:12:11.268 "filename": "/dev/nullb0", 00:12:11.268 "name": "null0" 00:12:11.268 }, 00:12:11.268 "method": "bdev_xnvme_create" 00:12:11.268 }, 00:12:11.268 { 00:12:11.268 "method": "bdev_wait_for_examine" 00:12:11.268 } 00:12:11.268 ] 00:12:11.268 } 00:12:11.268 ] 00:12:11.268 } 00:12:11.268 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:11.268 [2024-08-11 12:54:02.828967] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:12:11.268 [2024-08-11 12:54:02.829851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79793 ] 00:12:11.527 [2024-08-11 12:54:02.979451] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.527 [2024-08-11 12:54:03.025048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.666  Copying: 159/1024 [MB] (159 MBps) Copying: 293/1024 [MB] (134 MBps) Copying: 458/1024 [MB] (165 MBps) Copying: 620/1024 [MB] (161 MBps) Copying: 797/1024 [MB] (176 MBps) Copying: 975/1024 [MB] (178 MBps) Copying: 1024/1024 [MB] (average 163 MBps) 00:12:18.666 00:12:18.666 12:54:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:18.666 12:54:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:18.666 12:54:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:18.666 12:54:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:18.666 { 00:12:18.666 "subsystems": [ 00:12:18.666 { 00:12:18.666 "subsystem": "bdev", 00:12:18.666 "config": [ 00:12:18.666 { 00:12:18.666 "params": { 00:12:18.666 "block_size": 512, 00:12:18.666 "num_blocks": 2097152, 00:12:18.666 "name": "malloc0" 00:12:18.666 }, 00:12:18.666 "method": "bdev_malloc_create" 00:12:18.666 }, 00:12:18.666 { 00:12:18.666 "params": { 00:12:18.666 "io_mechanism": "libaio", 00:12:18.666 "filename": "/dev/nullb0", 00:12:18.666 "name": "null0" 00:12:18.666 }, 00:12:18.666 "method": "bdev_xnvme_create" 00:12:18.666 }, 00:12:18.666 { 00:12:18.666 "method": "bdev_wait_for_examine" 00:12:18.666 } 00:12:18.666 ] 00:12:18.666 } 00:12:18.666 ] 00:12:18.666 } 00:12:18.666 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:18.666 [2024-08-11 12:54:10.091127] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
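[editor's note] Stripped of the per-line timestamps, the --json configuration that gen_conf hands to spdk_dd in the copy above boils down to one malloc bdev plus one xnvme bdev on the null_blk device. A minimal way to reproduce that invocation by hand is sketched below; the temp-file path is arbitrary (the harness streams the same JSON over /dev/fd/62 instead of writing a file).

cat > /tmp/xnvme_dd.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json
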
00:12:18.666 [2024-08-11 12:54:10.091303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79880 ] 00:12:18.666 [2024-08-11 12:54:10.238622] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.925 [2024-08-11 12:54:10.281371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.916  Copying: 188/1024 [MB] (188 MBps) Copying: 378/1024 [MB] (189 MBps) Copying: 557/1024 [MB] (178 MBps) Copying: 749/1024 [MB] (192 MBps) Copying: 936/1024 [MB] (186 MBps) Copying: 1024/1024 [MB] (average 188 MBps) 00:12:24.916 00:12:24.916 12:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:24.916 12:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:24.916 12:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:24.916 12:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:24.916 12:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.916 12:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.916 { 00:12:24.916 "subsystems": [ 00:12:24.916 { 00:12:24.916 "subsystem": "bdev", 00:12:24.916 "config": [ 00:12:24.916 { 00:12:24.916 "params": { 00:12:24.916 "block_size": 512, 00:12:24.916 "num_blocks": 2097152, 00:12:24.916 "name": "malloc0" 00:12:24.916 }, 00:12:24.916 "method": "bdev_malloc_create" 00:12:24.916 }, 00:12:24.916 { 00:12:24.916 "params": { 00:12:24.916 "io_mechanism": "io_uring", 00:12:24.916 "filename": "/dev/nullb0", 00:12:24.916 "name": "null0" 00:12:24.916 }, 00:12:24.916 "method": "bdev_xnvme_create" 00:12:24.916 }, 00:12:24.916 { 00:12:24.916 "method": "bdev_wait_for_examine" 00:12:24.916 } 00:12:24.916 ] 00:12:24.916 } 00:12:24.916 ] 00:12:24.916 } 00:12:24.916 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:24.916 [2024-08-11 12:54:16.495916] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:12:24.916 [2024-08-11 12:54:16.496126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79951 ] 00:12:25.174 [2024-08-11 12:54:16.646794] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.174 [2024-08-11 12:54:16.688072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.009  Copying: 197/1024 [MB] (197 MBps) Copying: 391/1024 [MB] (193 MBps) Copying: 584/1024 [MB] (192 MBps) Copying: 781/1024 [MB] (197 MBps) Copying: 971/1024 [MB] (189 MBps) Copying: 1024/1024 [MB] (average 193 MBps) 00:12:31.009 00:12:31.009 12:54:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:31.009 12:54:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:31.009 12:54:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:31.009 12:54:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.268 { 00:12:31.268 "subsystems": [ 00:12:31.268 { 00:12:31.268 "subsystem": "bdev", 00:12:31.268 "config": [ 00:12:31.268 { 00:12:31.268 "params": { 00:12:31.268 "block_size": 512, 00:12:31.268 "num_blocks": 2097152, 00:12:31.268 "name": "malloc0" 00:12:31.268 }, 00:12:31.268 "method": "bdev_malloc_create" 00:12:31.268 }, 00:12:31.268 { 00:12:31.268 "params": { 00:12:31.268 "io_mechanism": "io_uring", 00:12:31.268 "filename": "/dev/nullb0", 00:12:31.268 "name": "null0" 00:12:31.268 }, 00:12:31.268 "method": "bdev_xnvme_create" 00:12:31.268 }, 00:12:31.268 { 00:12:31.268 "method": "bdev_wait_for_examine" 00:12:31.268 } 00:12:31.268 ] 00:12:31.268 } 00:12:31.268 ] 00:12:31.268 } 00:12:31.268 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:31.268 [2024-08-11 12:54:22.682180] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:12:31.268 [2024-08-11 12:54:22.682400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80027 ] 00:12:31.268 [2024-08-11 12:54:22.832069] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.526 [2024-08-11 12:54:22.876149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.468  Copying: 190/1024 [MB] (190 MBps) Copying: 377/1024 [MB] (187 MBps) Copying: 572/1024 [MB] (194 MBps) Copying: 760/1024 [MB] (188 MBps) Copying: 932/1024 [MB] (171 MBps) Copying: 1024/1024 [MB] (average 185 MBps) 00:12:37.468 00:12:37.468 12:54:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:37.468 12:54:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:37.726 00:12:37.726 real 0m26.386s 00:12:37.726 user 0m21.284s 00:12:37.727 sys 0m4.604s 00:12:37.727 12:54:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:37.727 ************************************ 00:12:37.727 END TEST xnvme_to_malloc_dd_copy 00:12:37.727 ************************************ 00:12:37.727 12:54:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.727 12:54:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:37.727 12:54:29 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:37.727 12:54:29 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.727 12:54:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.727 ************************************ 00:12:37.727 START TEST xnvme_bdevperf 00:12:37.727 ************************************ 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.727 12:54:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.727 { 00:12:37.727 "subsystems": [ 00:12:37.727 { 00:12:37.727 "subsystem": "bdev", 00:12:37.727 "config": [ 00:12:37.727 { 00:12:37.727 "params": { 00:12:37.727 "io_mechanism": "libaio", 00:12:37.727 "filename": "/dev/nullb0", 00:12:37.727 "name": "null0" 00:12:37.727 }, 00:12:37.727 "method": "bdev_xnvme_create" 00:12:37.727 }, 00:12:37.727 { 00:12:37.727 "method": "bdev_wait_for_examine" 00:12:37.727 } 00:12:37.727 ] 00:12:37.727 } 00:12:37.727 ] 00:12:37.727 } 00:12:37.727 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:37.727 [2024-08-11 12:54:29.262994] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:12:37.727 [2024-08-11 12:54:29.263167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80132 ] 00:12:37.985 [2024-08-11 12:54:29.411072] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.986 [2024-08-11 12:54:29.454973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.986 Running I/O for 5 seconds... 00:12:43.252 00:12:43.252 Latency(us) 00:12:43.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.252 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.252 null0 : 5.00 120481.42 470.63 0.00 0.00 527.93 204.80 1333.06 00:12:43.252 =================================================================================================================== 00:12:43.252 Total : 120481.42 470.63 0.00 0.00 527.93 204.80 1333.06 00:12:43.252 12:54:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:43.252 12:54:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:43.252 12:54:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:43.252 12:54:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:43.252 12:54:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:43.252 12:54:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.252 { 00:12:43.252 "subsystems": [ 00:12:43.252 { 00:12:43.252 "subsystem": "bdev", 00:12:43.252 "config": [ 00:12:43.252 { 00:12:43.252 "params": { 00:12:43.252 "io_mechanism": "io_uring", 00:12:43.252 "filename": "/dev/nullb0", 00:12:43.252 "name": "null0" 00:12:43.252 }, 00:12:43.252 "method": "bdev_xnvme_create" 00:12:43.252 }, 00:12:43.252 { 00:12:43.252 "method": "bdev_wait_for_examine" 00:12:43.252 } 00:12:43.252 ] 00:12:43.252 } 00:12:43.252 ] 00:12:43.252 } 00:12:43.252 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:43.252 [2024-08-11 12:54:34.838626] Starting SPDK v24.09-pre git sha1 
227b8322c / DPDK 22.11.4 initialization... 00:12:43.252 [2024-08-11 12:54:34.838816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80195 ] 00:12:43.511 [2024-08-11 12:54:34.986640] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.511 [2024-08-11 12:54:35.031032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.769 Running I/O for 5 seconds... 00:12:49.099 00:12:49.099 Latency(us) 00:12:49.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.099 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:49.099 null0 : 5.00 160049.88 625.19 0.00 0.00 396.80 214.11 1124.54 00:12:49.099 =================================================================================================================== 00:12:49.099 Total : 160049.88 625.19 0.00 0.00 396.80 214.11 1124.54 00:12:49.099 12:54:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:49.099 12:54:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:49.099 00:12:49.099 real 0m11.171s 00:12:49.099 user 0m8.225s 00:12:49.099 sys 0m2.734s 00:12:49.100 12:54:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.100 12:54:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:49.100 ************************************ 00:12:49.100 END TEST xnvme_bdevperf 00:12:49.100 ************************************ 00:12:49.100 00:12:49.100 real 0m37.759s 00:12:49.100 user 0m29.580s 00:12:49.100 sys 0m7.454s 00:12:49.100 12:54:40 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.100 12:54:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.100 ************************************ 00:12:49.100 END TEST nvme_xnvme 00:12:49.100 ************************************ 00:12:49.100 12:54:40 -- spdk/autotest.sh@258 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:49.100 12:54:40 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:49.100 12:54:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.100 12:54:40 -- common/autotest_common.sh@10 -- # set +x 00:12:49.100 ************************************ 00:12:49.100 START TEST blockdev_xnvme 00:12:49.100 ************************************ 00:12:49.100 12:54:40 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:49.100 * Looking for test storage... 
00:12:49.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80324 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80324 00:12:49.100 12:54:40 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:49.100 12:54:40 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 80324 ']' 00:12:49.100 12:54:40 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.100 12:54:40 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:49.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.100 12:54:40 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:49.100 12:54:40 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:49.100 12:54:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.100 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:49.100 [2024-08-11 12:54:40.628405] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:12:49.100 [2024-08-11 12:54:40.628589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80324 ] 00:12:49.358 [2024-08-11 12:54:40.775551] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.358 [2024-08-11 12:54:40.810882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.293 12:54:41 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:50.293 12:54:41 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:12:50.293 12:54:41 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:50.293 12:54:41 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:50.293 12:54:41 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:50.293 12:54:41 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:50.293 12:54:41 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:50.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:50.551 Waiting for block devices as requested 00:12:50.809 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.809 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.809 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.068 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.340 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
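[editor's note] The get_zoned_devs pass above walks /sys/block and filters out zoned namespaces before any xnvme bdevs are created. A sketch of the predicate it keeps evaluating (assumed shape, matching the [[ -e ... ]] / [[ none != none ]] pairs in the trace; on this machine every namespace reports "none", so nothing is excluded):

is_block_zoned() {
    local device=$1
    # A device counts as zoned only if the attribute exists and is not "none".
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}
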
00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:56.340 12:54:47 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.340 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:56.341 nvme0n1 00:12:56.341 nvme1n1 00:12:56.341 nvme2n1 00:12:56.341 nvme2n2 00:12:56.341 nvme2n3 00:12:56.341 nvme3n1 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 
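The printf | rpc_cmd pipeline above feeds one bdev_xnvme_create line per namespace into the running spdk_tgt, registering each raw block device as an xNVMe bdev driven by io_uring; the six bdev names echoed back confirm the registrations. Done by hand against the same target, the equivalent calls would look like this (a sketch using this workspace's paths):

    # Register one namespace as an xNVMe bdev: <filename> <bdev name> <io mechanism>
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
    # List what the target now exposes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | .name'

The same bdev_get_bdevs output, filtered through jq, is what populates the bdevs_name array a few lines below.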
00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "50fcff7d-d4bc-4a46-9df8-f3b42c497f58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "50fcff7d-d4bc-4a46-9df8-f3b42c497f58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "4d06c94b-ef46-4b49-9031-3a7c6e076434"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4d06c94b-ef46-4b49-9031-3a7c6e076434",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "237fa065-8c37-43f6-976f-a76317106c77"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "237fa065-8c37-43f6-976f-a76317106c77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ffdde342-abc9-4f05-9be2-b6a5d7a73653"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffdde342-abc9-4f05-9be2-b6a5d7a73653",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "e3be06e5-c3dd-4b64-87fe-7fb67727efcc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3be06e5-c3dd-4b64-87fe-7fb67727efcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "98eef367-cbe3-4c17-a86f-07a400c2b5d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "98eef367-cbe3-4c17-a86f-07a400c2b5d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:56.341 12:54:47 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80324 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 80324 ']' 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 80324 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:56.341 12:54:47 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80324 00:12:56.341 killing process with pid 80324 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80324' 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 80324 00:12:56.341 12:54:47 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 80324 00:12:56.600 12:54:48 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:56.600 12:54:48 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:56.600 12:54:48 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:56.600 12:54:48 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.600 12:54:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.600 ************************************ 00:12:56.600 START TEST bdev_hello_world 00:12:56.600 ************************************ 00:12:56.600 12:54:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:56.600 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:56.600 [2024-08-11 12:54:48.189087] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:12:56.600 [2024-08-11 12:54:48.189288] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80671 ] 00:12:56.858 [2024-08-11 12:54:48.340070] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.858 [2024-08-11 12:54:48.375542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.117 [2024-08-11 12:54:48.544324] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:57.117 [2024-08-11 12:54:48.544388] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:57.117 [2024-08-11 12:54:48.544454] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:57.117 [2024-08-11 12:54:48.546752] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:57.117 [2024-08-11 12:54:48.547122] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:57.117 [2024-08-11 12:54:48.547155] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:57.117 [2024-08-11 12:54:48.547466] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
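bdev_hello_world drives SPDK's stock hello_bdev example against the first xnvme bdev: open nvme0n1, grab an I/O channel, write a buffer, read it back, and verify the "Hello World!" round trip, as the NOTICE lines above show. Reproduced outside the harness it is a single invocation (sketch; bdev.json here is the configuration saved from the target earlier in the run):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1

The real/user/sys figures just below appear to be the shell's timing of that run, captured by the run_test wrapper.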
00:12:57.117 00:12:57.117 [2024-08-11 12:54:48.547514] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:57.376 00:12:57.376 real 0m0.638s 00:12:57.376 user 0m0.357s 00:12:57.376 sys 0m0.171s 00:12:57.376 ************************************ 00:12:57.376 END TEST bdev_hello_world 00:12:57.376 ************************************ 00:12:57.376 12:54:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.376 12:54:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:57.376 12:54:48 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:57.376 12:54:48 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:57.376 12:54:48 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.376 12:54:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.376 ************************************ 00:12:57.376 START TEST bdev_bounds 00:12:57.376 ************************************ 00:12:57.376 Process bdevio pid: 80697 00:12:57.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.376 12:54:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:12:57.376 12:54:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=80697 00:12:57.376 12:54:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:57.376 12:54:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 80697' 00:12:57.376 12:54:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 80697 00:12:57.376 12:54:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 80697 ']' 00:12:57.376 12:54:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.377 12:54:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:57.377 12:54:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.377 12:54:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.377 12:54:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.377 12:54:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:57.377 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:57.377 [2024-08-11 12:54:48.852347] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:12:57.377 [2024-08-11 12:54:48.852731] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80697 ] 00:12:57.659 [2024-08-11 12:54:48.995062] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.659 [2024-08-11 12:54:49.031722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.659 [2024-08-11 12:54:49.031807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.659 [2024-08-11 12:54:49.031933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.613 12:54:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:58.613 12:54:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:12:58.613 12:54:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:58.613 I/O targets: 00:12:58.613 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:58.613 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:58.613 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.613 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.613 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:58.613 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:58.613 00:12:58.613 00:12:58.613 CUnit - A unit testing framework for C - Version 2.1-3 00:12:58.613 http://cunit.sourceforge.net/ 00:12:58.613 00:12:58.613 00:12:58.613 Suite: bdevio tests on: nvme3n1 00:12:58.613 Test: blockdev write read block ...passed 00:12:58.613 Test: blockdev write zeroes read block ...passed 00:12:58.613 Test: blockdev write zeroes read no split ...passed 00:12:58.613 Test: blockdev write zeroes read split ...passed 00:12:58.613 Test: blockdev write zeroes read split partial ...passed 00:12:58.613 Test: blockdev reset ...passed 00:12:58.613 Test: blockdev write read 8 blocks ...passed 00:12:58.613 Test: blockdev write read size > 128k ...passed 00:12:58.613 Test: blockdev write read invalid size ...passed 00:12:58.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.613 Test: blockdev write read max offset ...passed 00:12:58.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.613 Test: blockdev writev readv 8 blocks ...passed 00:12:58.613 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.613 Test: blockdev writev readv block ...passed 00:12:58.613 Test: blockdev writev readv size > 128k ...passed 00:12:58.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.613 Test: blockdev comparev and writev ...passed 00:12:58.613 Test: blockdev nvme passthru rw ...passed 00:12:58.613 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.613 Test: blockdev nvme admin passthru ...passed 00:12:58.613 Test: blockdev copy ...passed 00:12:58.613 Suite: bdevio tests on: nvme2n3 00:12:58.613 Test: blockdev write read block ...passed 00:12:58.614 Test: blockdev write zeroes read block ...passed 00:12:58.614 Test: blockdev write zeroes read no split ...passed 00:12:58.614 Test: blockdev write zeroes read split ...passed 00:12:58.614 Test: blockdev write zeroes read split partial ...passed 00:12:58.614 Test: blockdev reset ...passed 
00:12:58.614 Test: blockdev write read 8 blocks ...passed 00:12:58.614 Test: blockdev write read size > 128k ...passed 00:12:58.614 Test: blockdev write read invalid size ...passed 00:12:58.614 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.614 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.614 Test: blockdev write read max offset ...passed 00:12:58.614 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.614 Test: blockdev writev readv 8 blocks ...passed 00:12:58.614 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.614 Test: blockdev writev readv block ...passed 00:12:58.614 Test: blockdev writev readv size > 128k ...passed 00:12:58.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.614 Test: blockdev comparev and writev ...passed 00:12:58.614 Test: blockdev nvme passthru rw ...passed 00:12:58.614 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.614 Test: blockdev nvme admin passthru ...passed 00:12:58.614 Test: blockdev copy ...passed 00:12:58.614 Suite: bdevio tests on: nvme2n2 00:12:58.614 Test: blockdev write read block ...passed 00:12:58.614 Test: blockdev write zeroes read block ...passed 00:12:58.614 Test: blockdev write zeroes read no split ...passed 00:12:58.614 Test: blockdev write zeroes read split ...passed 00:12:58.614 Test: blockdev write zeroes read split partial ...passed 00:12:58.614 Test: blockdev reset ...passed 00:12:58.614 Test: blockdev write read 8 blocks ...passed 00:12:58.614 Test: blockdev write read size > 128k ...passed 00:12:58.614 Test: blockdev write read invalid size ...passed 00:12:58.614 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.614 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.614 Test: blockdev write read max offset ...passed 00:12:58.614 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.614 Test: blockdev writev readv 8 blocks ...passed 00:12:58.614 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.614 Test: blockdev writev readv block ...passed 00:12:58.614 Test: blockdev writev readv size > 128k ...passed 00:12:58.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.614 Test: blockdev comparev and writev ...passed 00:12:58.614 Test: blockdev nvme passthru rw ...passed 00:12:58.614 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.614 Test: blockdev nvme admin passthru ...passed 00:12:58.614 Test: blockdev copy ...passed 00:12:58.614 Suite: bdevio tests on: nvme2n1 00:12:58.614 Test: blockdev write read block ...passed 00:12:58.614 Test: blockdev write zeroes read block ...passed 00:12:58.614 Test: blockdev write zeroes read no split ...passed 00:12:58.614 Test: blockdev write zeroes read split ...passed 00:12:58.614 Test: blockdev write zeroes read split partial ...passed 00:12:58.614 Test: blockdev reset ...passed 00:12:58.614 Test: blockdev write read 8 blocks ...passed 00:12:58.614 Test: blockdev write read size > 128k ...passed 00:12:58.614 Test: blockdev write read invalid size ...passed 00:12:58.614 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.614 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.614 Test: blockdev write read max offset ...passed 00:12:58.614 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.614 Test: blockdev writev readv 8 blocks 
...passed 00:12:58.614 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.614 Test: blockdev writev readv block ...passed 00:12:58.614 Test: blockdev writev readv size > 128k ...passed 00:12:58.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.614 Test: blockdev comparev and writev ...passed 00:12:58.614 Test: blockdev nvme passthru rw ...passed 00:12:58.614 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.614 Test: blockdev nvme admin passthru ...passed 00:12:58.614 Test: blockdev copy ...passed 00:12:58.614 Suite: bdevio tests on: nvme1n1 00:12:58.614 Test: blockdev write read block ...passed 00:12:58.614 Test: blockdev write zeroes read block ...passed 00:12:58.614 Test: blockdev write zeroes read no split ...passed 00:12:58.614 Test: blockdev write zeroes read split ...passed 00:12:58.614 Test: blockdev write zeroes read split partial ...passed 00:12:58.614 Test: blockdev reset ...passed 00:12:58.614 Test: blockdev write read 8 blocks ...passed 00:12:58.614 Test: blockdev write read size > 128k ...passed 00:12:58.614 Test: blockdev write read invalid size ...passed 00:12:58.614 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.614 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.614 Test: blockdev write read max offset ...passed 00:12:58.614 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.614 Test: blockdev writev readv 8 blocks ...passed 00:12:58.614 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.614 Test: blockdev writev readv block ...passed 00:12:58.614 Test: blockdev writev readv size > 128k ...passed 00:12:58.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.614 Test: blockdev comparev and writev ...passed 00:12:58.614 Test: blockdev nvme passthru rw ...passed 00:12:58.614 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.614 Test: blockdev nvme admin passthru ...passed 00:12:58.614 Test: blockdev copy ...passed 00:12:58.614 Suite: bdevio tests on: nvme0n1 00:12:58.614 Test: blockdev write read block ...passed 00:12:58.614 Test: blockdev write zeroes read block ...passed 00:12:58.614 Test: blockdev write zeroes read no split ...passed 00:12:58.614 Test: blockdev write zeroes read split ...passed 00:12:58.614 Test: blockdev write zeroes read split partial ...passed 00:12:58.614 Test: blockdev reset ...passed 00:12:58.614 Test: blockdev write read 8 blocks ...passed 00:12:58.614 Test: blockdev write read size > 128k ...passed 00:12:58.614 Test: blockdev write read invalid size ...passed 00:12:58.614 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.614 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.614 Test: blockdev write read max offset ...passed 00:12:58.614 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:58.614 Test: blockdev writev readv 8 blocks ...passed 00:12:58.614 Test: blockdev writev readv 30 x 1block ...passed 00:12:58.614 Test: blockdev writev readv block ...passed 00:12:58.614 Test: blockdev writev readv size > 128k ...passed 00:12:58.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:58.614 Test: blockdev comparev and writev ...passed 00:12:58.614 Test: blockdev nvme passthru rw ...passed 00:12:58.614 Test: blockdev nvme passthru vendor specific ...passed 00:12:58.614 Test: blockdev nvme admin passthru ...passed 00:12:58.614 Test: blockdev copy ...passed 
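bdev_bounds starts the bdevio app with -w so it sits waiting, and the per-bdev CUnit suites above only execute once tests.py perform_tests issues the trigger over the RPC socket. Stripped to the two commands the trace shows (a sketch, same workspace paths):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

One suite is generated per registered bdev, which is why the summary that follows counts six suites and 138 tests.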
00:12:58.614 00:12:58.614 Run Summary: Type Total Ran Passed Failed Inactive 00:12:58.614 suites 6 6 n/a 0 0 00:12:58.614 tests 138 138 138 0 0 00:12:58.614 asserts 780 780 780 0 n/a 00:12:58.614 00:12:58.614 Elapsed time = 0.287 seconds 00:12:58.614 0 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 80697 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 80697 ']' 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 80697 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80697 00:12:58.614 killing process with pid 80697 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80697' 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 80697 00:12:58.614 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 80697 00:12:58.873 ************************************ 00:12:58.873 END TEST bdev_bounds 00:12:58.873 ************************************ 00:12:58.873 12:54:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:58.873 00:12:58.873 real 0m1.565s 00:12:58.873 user 0m4.179s 00:12:58.873 sys 0m0.308s 00:12:58.873 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.873 12:54:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:58.873 12:54:50 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:58.873 12:54:50 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:58.873 12:54:50 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.873 12:54:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.873 ************************************ 00:12:58.873 START TEST bdev_nbd 00:12:58.873 ************************************ 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
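bdev_nbd exports every xnvme bdev through the kernel's nbd driver via a bdev_svc app listening on /var/tmp/spdk-nbd.sock, reads one 4 KiB block from each /dev/nbdX with an O_DIRECT dd as a sanity check, and then unmaps everything, finishing with an nbd_get_disks call that should come back empty. The per-device cycle, condensed (a sketch with an illustrative output path, not the literal test loop):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC nbd_start_disk nvme0n1 /dev/nbd0                          # map bdev -> kernel block device
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one block must copy cleanly
    $RPC nbd_stop_disk /dev/nbd0                                   # unmap the export
    $RPC nbd_get_disks                                             # empty list once all exports are gone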
00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:58.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:58.873 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=80747 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 80747 /var/tmp/spdk-nbd.sock 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 80747 ']' 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:58.874 12:54:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:59.132 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:12:59.132 [2024-08-11 12:54:50.479956] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:12:59.132 [2024-08-11 12:54:50.480104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.132 [2024-08-11 12:54:50.620021] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.132 [2024-08-11 12:54:50.655403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:00.067 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:00.068 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.326 
1+0 records in 00:13:00.326 1+0 records out 00:13:00.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491844 s, 8.3 MB/s 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:00.326 12:54:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.585 1+0 records in 00:13:00.585 1+0 records out 00:13:00.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744424 s, 5.5 MB/s 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:00.585 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:00.843 12:54:52 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.843 1+0 records in 00:13:00.843 1+0 records out 00:13:00.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061226 s, 6.7 MB/s 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:00.843 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:00.844 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.844 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.102 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.361 1+0 records in 00:13:01.361 1+0 records out 00:13:01.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527344 s, 7.8 MB/s 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.361 12:54:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.620 1+0 records in 00:13:01.620 1+0 records out 00:13:01.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000913108 s, 4.5 MB/s 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.620 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:13:01.878 12:54:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.878 1+0 records in 00:13:01.878 1+0 records out 00:13:01.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838689 s, 4.9 MB/s 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.878 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd0", 00:13:02.136 "bdev_name": "nvme0n1" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd1", 00:13:02.136 "bdev_name": "nvme1n1" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd2", 00:13:02.136 "bdev_name": "nvme2n1" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd3", 00:13:02.136 "bdev_name": "nvme2n2" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd4", 00:13:02.136 "bdev_name": "nvme2n3" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd5", 00:13:02.136 "bdev_name": "nvme3n1" 00:13:02.136 } 00:13:02.136 ]' 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd0", 00:13:02.136 "bdev_name": "nvme0n1" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd1", 00:13:02.136 "bdev_name": "nvme1n1" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd2", 00:13:02.136 "bdev_name": "nvme2n1" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd3", 00:13:02.136 "bdev_name": "nvme2n2" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd4", 00:13:02.136 "bdev_name": "nvme2n3" 00:13:02.136 }, 00:13:02.136 { 00:13:02.136 "nbd_device": "/dev/nbd5", 00:13:02.136 "bdev_name": "nvme3n1" 00:13:02.136 } 00:13:02.136 ]' 00:13:02.136 12:54:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.136 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.395 12:54:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.654 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.221 12:54:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:03.480 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.739 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:03.997 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.998 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.257 12:54:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:04.516 /dev/nbd0 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.774 1+0 records in 00:13:04.774 1+0 records out 00:13:04.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632992 s, 6.5 MB/s 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.774 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.775 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:05.033 /dev/nbd1 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.033 1+0 records in 00:13:05.033 1+0 records out 00:13:05.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653416 s, 6.3 MB/s 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:05.033 12:54:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.033 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:05.292 /dev/nbd10 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:05.292 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.292 1+0 records in 00:13:05.292 1+0 records out 00:13:05.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624259 s, 6.6 MB/s 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.293 12:54:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:05.551 /dev/nbd11 00:13:05.551 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:05.810 12:54:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.810 1+0 records in 00:13:05.810 1+0 records out 00:13:05.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762513 s, 5.4 MB/s 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.810 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:06.068 /dev/nbd12 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:06.068 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.069 1+0 records in 00:13:06.069 1+0 records out 00:13:06.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606043 s, 6.8 MB/s 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.069 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:06.327 /dev/nbd13 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.327 1+0 records in 00:13:06.327 1+0 records out 00:13:06.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593427 s, 6.9 MB/s 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.327 12:54:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.585 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:06.585 { 00:13:06.585 "nbd_device": "/dev/nbd0", 00:13:06.585 "bdev_name": "nvme0n1" 00:13:06.585 }, 00:13:06.585 { 00:13:06.585 "nbd_device": "/dev/nbd1", 00:13:06.585 "bdev_name": "nvme1n1" 00:13:06.585 }, 00:13:06.585 { 00:13:06.585 "nbd_device": "/dev/nbd10", 00:13:06.585 "bdev_name": "nvme2n1" 00:13:06.585 }, 00:13:06.585 { 00:13:06.585 "nbd_device": "/dev/nbd11", 00:13:06.585 "bdev_name": "nvme2n2" 00:13:06.585 }, 00:13:06.585 { 00:13:06.585 "nbd_device": "/dev/nbd12", 00:13:06.585 "bdev_name": "nvme2n3" 00:13:06.585 }, 00:13:06.586 { 00:13:06.586 "nbd_device": "/dev/nbd13", 00:13:06.586 "bdev_name": "nvme3n1" 00:13:06.586 } 00:13:06.586 ]' 00:13:06.586 12:54:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:06.586 { 00:13:06.586 "nbd_device": "/dev/nbd0", 00:13:06.586 "bdev_name": "nvme0n1" 00:13:06.586 }, 00:13:06.586 { 00:13:06.586 "nbd_device": "/dev/nbd1", 00:13:06.586 "bdev_name": "nvme1n1" 00:13:06.586 }, 00:13:06.586 { 00:13:06.586 "nbd_device": "/dev/nbd10", 00:13:06.586 "bdev_name": "nvme2n1" 00:13:06.586 }, 00:13:06.586 { 00:13:06.586 "nbd_device": "/dev/nbd11", 00:13:06.586 "bdev_name": "nvme2n2" 00:13:06.586 }, 00:13:06.586 { 00:13:06.586 "nbd_device": "/dev/nbd12", 00:13:06.586 "bdev_name": "nvme2n3" 00:13:06.586 }, 00:13:06.586 { 00:13:06.586 "nbd_device": "/dev/nbd13", 00:13:06.586 "bdev_name": "nvme3n1" 00:13:06.586 } 00:13:06.586 ]' 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:06.586 /dev/nbd1 00:13:06.586 /dev/nbd10 00:13:06.586 /dev/nbd11 00:13:06.586 /dev/nbd12 00:13:06.586 /dev/nbd13' 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:06.586 /dev/nbd1 00:13:06.586 /dev/nbd10 00:13:06.586 /dev/nbd11 00:13:06.586 /dev/nbd12 00:13:06.586 /dev/nbd13' 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:06.586 256+0 records in 00:13:06.586 256+0 records out 00:13:06.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00778293 s, 135 MB/s 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.586 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:06.845 256+0 records in 00:13:06.845 256+0 records out 00:13:06.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140029 s, 7.5 MB/s 00:13:06.845 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.845 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:07.104 256+0 records in 00:13:07.104 256+0 records out 00:13:07.104 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.161828 s, 6.5 MB/s 00:13:07.104 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.104 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:07.104 256+0 records in 00:13:07.104 256+0 records out 00:13:07.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133361 s, 7.9 MB/s 00:13:07.104 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.104 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:07.363 256+0 records in 00:13:07.363 256+0 records out 00:13:07.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145068 s, 7.2 MB/s 00:13:07.363 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.363 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:07.363 256+0 records in 00:13:07.363 256+0 records out 00:13:07.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134891 s, 7.8 MB/s 00:13:07.363 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.363 12:54:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:07.621 256+0 records in 00:13:07.621 256+0 records out 00:13:07.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148476 s, 7.1 MB/s 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.621 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.880 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.139 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:08.742 12:54:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.742 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.307 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:09.564 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:09.564 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:09.564 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:09.564 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.564 12:55:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.565 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:09.565 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.565 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.565 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:09.565 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.565 12:55:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:09.822 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:09.823 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:09.823 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.823 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.823 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:09.823 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:09.823 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:10.081 malloc_lvol_verify 00:13:10.340 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:10.598 85aa262e-a770-4fa4-875c-ad1096f67277 00:13:10.598 12:55:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:10.855 c35b8160-4372-4d31-928d-67ace0bfccf8 00:13:10.855 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:11.113 /dev/nbd0 00:13:11.113 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:11.113 mke2fs 1.47.0 (5-Feb-2023) 00:13:11.113 Discarding device blocks: 0/4096 done 00:13:11.113 
Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:11.113 00:13:11.113 Allocating group tables: 0/1 done 00:13:11.113 Writing inode tables: 0/1 done 00:13:11.113 Creating journal (1024 blocks): done 00:13:11.113 Writing superblocks and filesystem accounting information: 0/1 done 00:13:11.113 00:13:11.113 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:11.113 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:11.113 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.113 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:11.113 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.114 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:11.114 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.114 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 80747 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 80747 ']' 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 80747 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80747 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:11.372 killing process with pid 80747 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80747' 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 80747 00:13:11.372 12:55:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 80747 00:13:11.631 12:55:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:11.631 00:13:11.631 real 0m12.741s 00:13:11.631 user 0m18.868s 00:13:11.631 sys 0m4.329s 00:13:11.631 12:55:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1122 -- 
# xtrace_disable 00:13:11.631 12:55:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:11.631 ************************************ 00:13:11.631 END TEST bdev_nbd 00:13:11.631 ************************************ 00:13:11.631 12:55:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:11.631 12:55:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:11.631 12:55:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:11.631 12:55:03 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:11.631 12:55:03 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:11.631 12:55:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.631 12:55:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.631 ************************************ 00:13:11.631 START TEST bdev_fio 00:13:11.631 ************************************ 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:11.631 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:13:11.631 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio 
-- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.890 12:55:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:11.890 ************************************ 00:13:11.890 START TEST bdev_fio_rw_verify 00:13:11.890 ************************************ 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:11.891 12:55:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:12.149 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.149 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.149 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.149 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.149 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.149 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.149 fio-3.35 00:13:12.149 Starting 6 threads 00:13:12.149 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:24.346 00:13:24.346 job_nvme0n1: (groupid=0, 
jobs=6): err= 0: pid=81176: Sun Aug 11 12:55:14 2024 00:13:24.346 read: IOPS=26.6k, BW=104MiB/s (109MB/s)(1038MiB/10001msec) 00:13:24.346 slat (usec): min=2, max=462, avg= 7.22, stdev= 4.06 00:13:24.346 clat (usec): min=104, max=6098, avg=732.76, stdev=326.59 00:13:24.346 lat (usec): min=110, max=6107, avg=739.99, stdev=327.14 00:13:24.346 clat percentiles (usec): 00:13:24.346 | 50.000th=[ 734], 99.000th=[ 1663], 99.900th=[ 4621], 99.990th=[ 5866], 00:13:24.346 | 99.999th=[ 6063] 00:13:24.346 write: IOPS=26.7k, BW=104MiB/s (109MB/s)(1043MiB/10001msec); 0 zone resets 00:13:24.346 slat (usec): min=13, max=3831, avg=25.33, stdev=28.11 00:13:24.346 clat (usec): min=105, max=6266, avg=796.04, stdev=329.89 00:13:24.346 lat (usec): min=125, max=6318, avg=821.37, stdev=331.93 00:13:24.346 clat percentiles (usec): 00:13:24.346 | 50.000th=[ 791], 99.000th=[ 1729], 99.900th=[ 4883], 99.990th=[ 5997], 00:13:24.346 | 99.999th=[ 6259] 00:13:24.346 bw ( KiB/s): min=70512, max=136824, per=99.76%, avg=106499.79, stdev=2534.32, samples=114 00:13:24.346 iops : min=17628, max=34206, avg=26624.95, stdev=633.58, samples=114 00:13:24.346 lat (usec) : 250=1.65%, 500=12.51%, 750=33.15%, 1000=42.75% 00:13:24.346 lat (msec) : 2=9.26%, 4=0.49%, 10=0.19% 00:13:24.346 cpu : usr=63.32%, sys=24.85%, ctx=7807, majf=0, minf=24693 00:13:24.346 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.346 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.346 issued rwts: total=265608,266909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.346 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:24.346 00:13:24.346 Run status group 0 (all jobs): 00:13:24.346 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=1038MiB (1088MB), run=10001-10001msec 00:13:24.346 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=1043MiB (1093MB), run=10001-10001msec 00:13:24.346 ----------------------------------------------------- 00:13:24.346 Suppressions used: 00:13:24.346 count bytes template 00:13:24.346 6 48 /usr/src/fio/parse.c 00:13:24.346 1118 107328 /usr/src/fio/iolog.c 00:13:24.346 1 8 libtcmalloc_minimal.so 00:13:24.346 1 904 libcrypto.so 00:13:24.346 ----------------------------------------------------- 00:13:24.346 00:13:24.346 00:13:24.346 real 0m11.180s 00:13:24.346 user 0m38.760s 00:13:24.346 sys 0m15.203s 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:24.347 ************************************ 00:13:24.347 END TEST bdev_fio_rw_verify 00:13:24.347 ************************************ 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1278 -- # local bdev_type= 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "50fcff7d-d4bc-4a46-9df8-f3b42c497f58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "50fcff7d-d4bc-4a46-9df8-f3b42c497f58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "4d06c94b-ef46-4b49-9031-3a7c6e076434"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4d06c94b-ef46-4b49-9031-3a7c6e076434",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "237fa065-8c37-43f6-976f-a76317106c77"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "237fa065-8c37-43f6-976f-a76317106c77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ffdde342-abc9-4f05-9be2-b6a5d7a73653"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffdde342-abc9-4f05-9be2-b6a5d7a73653",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "e3be06e5-c3dd-4b64-87fe-7fb67727efcc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3be06e5-c3dd-4b64-87fe-7fb67727efcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "98eef367-cbe3-4c17-a86f-07a400c2b5d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "98eef367-cbe3-4c17-a86f-07a400c2b5d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 
-- # popd 00:13:24.347 /home/vagrant/spdk_repo/spdk 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:24.347 00:13:24.347 real 0m11.373s 00:13:24.347 user 0m38.868s 00:13:24.347 sys 0m15.287s 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.347 12:55:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:24.347 ************************************ 00:13:24.347 END TEST bdev_fio 00:13:24.347 ************************************ 00:13:24.347 12:55:14 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:24.347 12:55:14 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:24.347 12:55:14 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:24.347 12:55:14 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.347 12:55:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.347 ************************************ 00:13:24.347 START TEST bdev_verify 00:13:24.347 ************************************ 00:13:24.347 12:55:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:24.347 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:24.347 [2024-08-11 12:55:14.709513] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:24.347 [2024-08-11 12:55:14.709688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81335 ] 00:13:24.347 [2024-08-11 12:55:14.858634] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:24.347 [2024-08-11 12:55:14.898727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.347 [2024-08-11 12:55:14.898774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.347 Running I/O for 5 seconds... 
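The bdev_verify pass launched just above drives every xNVMe bdev through the bdevperf example application; its per-device results follow below. A hand-run sketch of the same invocation, assuming the repo layout used throughout this log (the SPDK variable is just shorthand), would be:

    # -q 128: queue depth per job        -o 4096: I/O size in bytes
    # -w verify: write/read-back workload   -t 5: run time in seconds
    # -m 0x3 with -C: both cores submit I/O to every bdev, which is why each
    #                 device shows paired "Core Mask 0x1" / "Core Mask 0x2" rows
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3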
00:13:29.612 00:13:29.612 Latency(us) 00:13:29.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.612 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x0 length 0xa0000 00:13:29.612 nvme0n1 : 5.06 1620.39 6.33 0.00 0.00 78835.03 12034.79 71017.19 00:13:29.612 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0xa0000 length 0xa0000 00:13:29.612 nvme0n1 : 5.08 1663.49 6.50 0.00 0.00 76798.45 10366.60 75783.45 00:13:29.612 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x0 length 0xbd0bd 00:13:29.612 nvme1n1 : 5.07 2835.10 11.07 0.00 0.00 44853.56 5540.77 65297.69 00:13:29.612 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:29.612 nvme1n1 : 5.07 2905.09 11.35 0.00 0.00 43792.55 4617.31 64821.06 00:13:29.612 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x0 length 0x80000 00:13:29.612 nvme2n1 : 5.06 1619.47 6.33 0.00 0.00 78432.51 14060.45 65774.31 00:13:29.612 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x80000 length 0x80000 00:13:29.612 nvme2n1 : 5.07 1665.91 6.51 0.00 0.00 76285.99 9651.67 70063.94 00:13:29.612 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x0 length 0x80000 00:13:29.612 nvme2n2 : 5.06 1618.77 6.32 0.00 0.00 78303.11 10545.34 74353.57 00:13:29.612 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x80000 length 0x80000 00:13:29.612 nvme2n2 : 5.08 1662.63 6.49 0.00 0.00 76296.77 16801.05 66250.94 00:13:29.612 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x0 length 0x80000 00:13:29.612 nvme2n3 : 5.07 1615.36 6.31 0.00 0.00 78310.37 13762.56 71493.82 00:13:29.612 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x80000 length 0x80000 00:13:29.612 nvme2n3 : 5.08 1664.57 6.50 0.00 0.00 76060.05 15728.64 69110.69 00:13:29.612 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x0 length 0x20000 00:13:29.612 nvme3n1 : 5.08 1637.47 6.40 0.00 0.00 77111.15 2815.07 76260.07 00:13:29.612 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:29.612 Verification LBA range: start 0x20000 length 0x20000 00:13:29.612 nvme3n1 : 5.09 1661.18 6.49 0.00 0.00 76072.84 10247.45 74353.57 00:13:29.612 =================================================================================================================== 00:13:29.612 Total : 22169.43 86.60 0.00 0.00 68717.90 2815.07 76260.07 00:13:29.612 00:13:29.612 real 0m5.859s 00:13:29.612 user 0m8.993s 00:13:29.612 sys 0m1.744s 00:13:29.612 12:55:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:29.612 ************************************ 00:13:29.612 END TEST bdev_verify 00:13:29.612 12:55:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:29.612 ************************************ 00:13:29.613 12:55:20 blockdev_xnvme -- bdev/blockdev.sh@777 
-- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:29.613 12:55:20 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:29.613 12:55:20 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:29.613 12:55:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.613 ************************************ 00:13:29.613 START TEST bdev_verify_big_io 00:13:29.613 ************************************ 00:13:29.613 12:55:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:29.613 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:29.613 [2024-08-11 12:55:20.616304] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:29.613 [2024-08-11 12:55:20.616520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81430 ] 00:13:29.613 [2024-08-11 12:55:20.768054] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.613 [2024-08-11 12:55:20.818891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.613 [2024-08-11 12:55:20.818912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.613 Running I/O for 5 seconds... 00:13:36.168 00:13:36.168 Latency(us) 00:13:36.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.168 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x0 length 0xa000 00:13:36.168 nvme0n1 : 6.09 116.66 7.29 0.00 0.00 1055768.35 127735.62 1715851.64 00:13:36.168 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0xa000 length 0xa000 00:13:36.168 nvme0n1 : 6.02 89.07 5.57 0.00 0.00 1396664.64 78643.20 2089525.99 00:13:36.168 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x0 length 0xbd0b 00:13:36.168 nvme1n1 : 6.09 136.67 8.54 0.00 0.00 863339.41 168725.41 1121023.07 00:13:36.168 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:36.168 nvme1n1 : 5.99 130.83 8.18 0.00 0.00 897081.49 9532.51 1326925.27 00:13:36.168 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x0 length 0x8000 00:13:36.168 nvme2n1 : 6.05 127.04 7.94 0.00 0.00 898677.29 165865.66 1273543.21 00:13:36.168 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x8000 length 0x8000 00:13:36.168 nvme2n1 : 6.01 135.83 8.49 0.00 0.00 831838.75 95325.09 911307.87 00:13:36.168 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x0 length 0x8000 00:13:36.168 nvme2n2 : 6.13 109.62 6.85 0.00 0.00 999945.31 79119.83 1204909.15 00:13:36.168 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 
Verification LBA range: start 0x8000 length 0x8000 00:13:36.168 nvme2n2 : 6.02 115.62 7.23 0.00 0.00 972248.82 142987.64 1639591.56 00:13:36.168 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x0 length 0x8000 00:13:36.168 nvme2n3 : 6.09 112.95 7.06 0.00 0.00 934158.01 36223.53 2333558.23 00:13:36.168 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x8000 length 0x8000 00:13:36.168 nvme2n3 : 6.01 158.34 9.90 0.00 0.00 696279.70 12511.42 800730.76 00:13:36.168 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x0 length 0x2000 00:13:36.168 nvme3n1 : 6.20 142.00 8.87 0.00 0.00 721666.45 1474.56 2791118.66 00:13:36.168 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:36.168 Verification LBA range: start 0x2000 length 0x2000 00:13:36.168 nvme3n1 : 6.02 148.93 9.31 0.00 0.00 717885.64 11439.01 892242.85 00:13:36.168 =================================================================================================================== 00:13:36.168 Total : 1523.56 95.22 0.00 0.00 890713.11 1474.56 2791118.66 00:13:36.168 00:13:36.168 real 0m6.992s 00:13:36.168 user 0m12.749s 00:13:36.168 sys 0m0.522s 00:13:36.168 12:55:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.168 ************************************ 00:13:36.168 12:55:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.168 END TEST bdev_verify_big_io 00:13:36.168 ************************************ 00:13:36.168 12:55:27 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.168 12:55:27 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:36.168 12:55:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.168 12:55:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.168 ************************************ 00:13:36.168 START TEST bdev_write_zeroes 00:13:36.168 ************************************ 00:13:36.168 12:55:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.168 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:36.168 [2024-08-11 12:55:27.632593] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:36.169 [2024-08-11 12:55:27.632718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81534 ] 00:13:36.427 [2024-08-11 12:55:27.773420] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.427 [2024-08-11 12:55:27.807858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.427 Running I/O for 1 seconds... 
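Like the earlier trim-job selection (the jq filter on supported_io_types.unmap), this write_zeroes pass keys off what each xNVMe bdev advertises in its supported_io_types map; the JSON dump above shows unmap=false but write_zeroes=true for all six devices. A quick way to inspect that against a running target, assuming the stock scripts/rpc.py client, might be:

    # List bdevs that advertise a given I/O type (swap write_zeroes for unmap,
    # flush, etc. as needed).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'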
00:13:37.804 00:13:37.804 Latency(us) 00:13:37.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.804 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.804 nvme0n1 : 1.02 10913.40 42.63 0.00 0.00 11715.10 7149.38 18230.92 00:13:37.804 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.804 nvme1n1 : 1.02 16188.76 63.24 0.00 0.00 7890.44 4081.11 13524.25 00:13:37.804 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.804 nvme2n1 : 1.02 10896.81 42.57 0.00 0.00 11659.26 7119.59 18826.71 00:13:37.804 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.804 nvme2n2 : 1.02 10880.84 42.50 0.00 0.00 11666.48 7119.59 19422.49 00:13:37.804 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.804 nvme2n3 : 1.02 10864.80 42.44 0.00 0.00 11673.37 7149.38 19779.96 00:13:37.804 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.804 nvme3n1 : 1.03 10848.98 42.38 0.00 0.00 11681.07 7179.17 20137.43 00:13:37.804 =================================================================================================================== 00:13:37.804 Total : 70593.61 275.76 0.00 0.00 10813.81 4081.11 20137.43 00:13:37.804 00:13:37.804 real 0m1.683s 00:13:37.804 user 0m1.006s 00:13:37.804 sys 0m0.502s 00:13:37.804 12:55:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:37.804 ************************************ 00:13:37.804 END TEST bdev_write_zeroes 00:13:37.804 12:55:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:37.804 ************************************ 00:13:37.804 12:55:29 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.804 12:55:29 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:37.804 12:55:29 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:37.804 12:55:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.804 ************************************ 00:13:37.804 START TEST bdev_json_nonenclosed 00:13:37.804 ************************************ 00:13:37.804 12:55:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.804 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:37.804 [2024-08-11 12:55:29.373542] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:13:37.804 [2024-08-11 12:55:29.374146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81566 ] 00:13:38.063 [2024-08-11 12:55:29.514255] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.063 [2024-08-11 12:55:29.550072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.063 [2024-08-11 12:55:29.550202] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:38.063 [2024-08-11 12:55:29.550236] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:38.063 [2024-08-11 12:55:29.550275] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.063 00:13:38.063 real 0m0.362s 00:13:38.063 user 0m0.155s 00:13:38.063 sys 0m0.102s 00:13:38.063 12:55:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:38.063 12:55:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:38.063 ************************************ 00:13:38.063 END TEST bdev_json_nonenclosed 00:13:38.063 ************************************ 00:13:38.321 12:55:29 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.321 12:55:29 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:38.321 12:55:29 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:38.321 12:55:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.321 ************************************ 00:13:38.321 START TEST bdev_json_nonarray 00:13:38.321 ************************************ 00:13:38.322 12:55:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.322 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:38.322 [2024-08-11 12:55:29.798400] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:38.322 [2024-08-11 12:55:29.798591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81597 ] 00:13:38.580 [2024-08-11 12:55:29.950485] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.580 [2024-08-11 12:55:29.992842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.580 [2024-08-11 12:55:29.992985] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
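Both negative tests above feed bdevperf a deliberately malformed --json file and expect a clean error exit: nonenclosed.json trips the "not enclosed in {}" message, nonarray.json the "'subsystems' should be an array" one. For contrast, a valid config is a single JSON object whose "subsystems" member is an array, the same shape save_config emits later in this log. A minimal sketch (the /tmp path is illustrative, and the exact contents of the two fixture files are presumed, not taken from this log):

    # Minimal well-formed SPDK JSON config: outer {} plus a "subsystems" array.
    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF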
00:13:38.580 [2024-08-11 12:55:29.993027] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:38.580 [2024-08-11 12:55:29.993055] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.580 00:13:38.580 real 0m0.415s 00:13:38.580 user 0m0.187s 00:13:38.580 sys 0m0.112s 00:13:38.580 12:55:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:38.580 12:55:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:38.580 ************************************ 00:13:38.580 END TEST bdev_json_nonarray 00:13:38.580 ************************************ 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:38.580 12:55:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:39.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:44.442 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:44.442 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:44.443 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:44.443 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:44.443 00:13:44.443 real 0m54.723s 00:13:44.443 user 1m34.342s 00:13:44.443 sys 0m36.167s 00:13:44.443 12:55:35 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.443 12:55:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.443 ************************************ 00:13:44.443 END TEST blockdev_xnvme 00:13:44.443 ************************************ 00:13:44.443 12:55:35 -- spdk/autotest.sh@260 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:44.443 12:55:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:44.443 12:55:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.443 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:44.443 ************************************ 00:13:44.443 START TEST ublk 00:13:44.443 ************************************ 00:13:44.443 12:55:35 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:44.443 * Looking for test storage... 
00:13:44.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:44.443 12:55:35 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:44.443 12:55:35 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:44.443 12:55:35 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:44.443 12:55:35 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:44.443 12:55:35 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:44.443 12:55:35 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:44.443 12:55:35 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:44.443 12:55:35 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:44.443 12:55:35 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:44.443 12:55:35 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:44.443 12:55:35 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.443 12:55:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.443 ************************************ 00:13:44.443 START TEST test_save_ublk_config 00:13:44.443 ************************************ 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=81874 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 81874 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 81874 ']' 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
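test_save_ublk_config, which starts here, checks that a ublk target and its exported disk survive a save_config/reload round trip. Stripped of the harness, the interesting part is roughly the following (the /tmp/ublk.json path is illustrative shorthand; the device-creation steps themselves appear verbatim in the trace below):

    # With the first spdk_tgt -L ublk running and /dev/ublkb0 created, dump the
    # live configuration, stop that target, and bring a new one up from the dump.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" save_config > /tmp/ublk.json
    # ... stop the first target ...
    "$SPDK/build/bin/spdk_tgt" -L ublk -c /tmp/ublk.json &
    # once the RPC socket is back up, the disk should reappear:
    "$SPDK/scripts/rpc.py" ublk_get_disks | jq -r '.[0].ublk_device'   # /dev/ublkb0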
00:13:44.443 12:55:35 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:44.443 12:55:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:44.443 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:44.443 [2024-08-11 12:55:35.396097] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:44.443 [2024-08-11 12:55:35.396279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81874 ] 00:13:44.443 [2024-08-11 12:55:35.545371] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.443 [2024-08-11 12:55:35.594977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.011 [2024-08-11 12:55:36.413899] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:45.011 [2024-08-11 12:55:36.414220] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:45.011 malloc0 00:13:45.011 [2024-08-11 12:55:36.446034] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:45.011 [2024-08-11 12:55:36.446129] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:45.011 [2024-08-11 12:55:36.446149] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:45.011 [2024-08-11 12:55:36.446159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:45.011 [2024-08-11 12:55:36.455012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:45.011 [2024-08-11 12:55:36.455040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:45.011 [2024-08-11 12:55:36.461938] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:45.011 [2024-08-11 12:55:36.462062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:45.011 [2024-08-11 12:55:36.478910] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:45.011 0 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:45.011 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.270 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:45.270 12:55:36 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:45.270 "subsystems": [ 00:13:45.270 { 00:13:45.270 "subsystem": "keyring", 00:13:45.270 "config": [] 
00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "subsystem": "iobuf", 00:13:45.270 "config": [ 00:13:45.270 { 00:13:45.270 "method": "iobuf_set_options", 00:13:45.270 "params": { 00:13:45.270 "small_pool_count": 8192, 00:13:45.270 "large_pool_count": 1024, 00:13:45.270 "small_bufsize": 8192, 00:13:45.270 "large_bufsize": 135168 00:13:45.270 } 00:13:45.270 } 00:13:45.270 ] 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "subsystem": "sock", 00:13:45.270 "config": [ 00:13:45.270 { 00:13:45.270 "method": "sock_set_default_impl", 00:13:45.270 "params": { 00:13:45.270 "impl_name": "posix" 00:13:45.270 } 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "method": "sock_impl_set_options", 00:13:45.270 "params": { 00:13:45.270 "impl_name": "ssl", 00:13:45.270 "recv_buf_size": 4096, 00:13:45.270 "send_buf_size": 4096, 00:13:45.270 "enable_recv_pipe": true, 00:13:45.270 "enable_quickack": false, 00:13:45.270 "enable_placement_id": 0, 00:13:45.270 "enable_zerocopy_send_server": true, 00:13:45.270 "enable_zerocopy_send_client": false, 00:13:45.270 "zerocopy_threshold": 0, 00:13:45.270 "tls_version": 0, 00:13:45.270 "enable_ktls": false 00:13:45.270 } 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "method": "sock_impl_set_options", 00:13:45.270 "params": { 00:13:45.270 "impl_name": "posix", 00:13:45.270 "recv_buf_size": 2097152, 00:13:45.270 "send_buf_size": 2097152, 00:13:45.270 "enable_recv_pipe": true, 00:13:45.270 "enable_quickack": false, 00:13:45.270 "enable_placement_id": 0, 00:13:45.270 "enable_zerocopy_send_server": true, 00:13:45.270 "enable_zerocopy_send_client": false, 00:13:45.270 "zerocopy_threshold": 0, 00:13:45.270 "tls_version": 0, 00:13:45.270 "enable_ktls": false 00:13:45.270 } 00:13:45.270 } 00:13:45.270 ] 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "subsystem": "vmd", 00:13:45.270 "config": [] 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "subsystem": "accel", 00:13:45.270 "config": [ 00:13:45.270 { 00:13:45.270 "method": "accel_set_options", 00:13:45.270 "params": { 00:13:45.270 "small_cache_size": 128, 00:13:45.270 "large_cache_size": 16, 00:13:45.270 "task_count": 2048, 00:13:45.270 "sequence_count": 2048, 00:13:45.270 "buf_count": 2048 00:13:45.270 } 00:13:45.270 } 00:13:45.270 ] 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "subsystem": "bdev", 00:13:45.270 "config": [ 00:13:45.270 { 00:13:45.270 "method": "bdev_set_options", 00:13:45.270 "params": { 00:13:45.270 "bdev_io_pool_size": 65535, 00:13:45.270 "bdev_io_cache_size": 256, 00:13:45.270 "bdev_auto_examine": true, 00:13:45.270 "iobuf_small_cache_size": 128, 00:13:45.270 "iobuf_large_cache_size": 16 00:13:45.270 } 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "method": "bdev_raid_set_options", 00:13:45.270 "params": { 00:13:45.270 "process_window_size_kb": 1024, 00:13:45.270 "process_max_bandwidth_mb_sec": 0 00:13:45.270 } 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "method": "bdev_iscsi_set_options", 00:13:45.270 "params": { 00:13:45.270 "timeout_sec": 30 00:13:45.270 } 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "method": "bdev_nvme_set_options", 00:13:45.270 "params": { 00:13:45.270 "action_on_timeout": "none", 00:13:45.270 "timeout_us": 0, 00:13:45.270 "timeout_admin_us": 0, 00:13:45.270 "keep_alive_timeout_ms": 10000, 00:13:45.270 "arbitration_burst": 0, 00:13:45.270 "low_priority_weight": 0, 00:13:45.270 "medium_priority_weight": 0, 00:13:45.270 "high_priority_weight": 0, 00:13:45.270 "nvme_adminq_poll_period_us": 10000, 00:13:45.270 "nvme_ioq_poll_period_us": 0, 00:13:45.270 "io_queue_requests": 0, 00:13:45.270 "delay_cmd_submit": true, 
00:13:45.270 "transport_retry_count": 4, 00:13:45.270 "bdev_retry_count": 3, 00:13:45.270 "transport_ack_timeout": 0, 00:13:45.270 "ctrlr_loss_timeout_sec": 0, 00:13:45.270 "reconnect_delay_sec": 0, 00:13:45.270 "fast_io_fail_timeout_sec": 0, 00:13:45.270 "disable_auto_failback": false, 00:13:45.270 "generate_uuids": false, 00:13:45.270 "transport_tos": 0, 00:13:45.270 "nvme_error_stat": false, 00:13:45.270 "rdma_srq_size": 0, 00:13:45.270 "io_path_stat": false, 00:13:45.270 "allow_accel_sequence": false, 00:13:45.270 "rdma_max_cq_size": 0, 00:13:45.270 "rdma_cm_event_timeout_ms": 0, 00:13:45.270 "dhchap_digests": [ 00:13:45.270 "sha256", 00:13:45.270 "sha384", 00:13:45.270 "sha512" 00:13:45.270 ], 00:13:45.270 "dhchap_dhgroups": [ 00:13:45.270 "null", 00:13:45.270 "ffdhe2048", 00:13:45.270 "ffdhe3072", 00:13:45.270 "ffdhe4096", 00:13:45.270 "ffdhe6144", 00:13:45.270 "ffdhe8192" 00:13:45.270 ] 00:13:45.270 } 00:13:45.270 }, 00:13:45.270 { 00:13:45.270 "method": "bdev_nvme_set_hotplug", 00:13:45.270 "params": { 00:13:45.270 "period_us": 100000, 00:13:45.271 "enable": false 00:13:45.271 } 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "method": "bdev_malloc_create", 00:13:45.271 "params": { 00:13:45.271 "name": "malloc0", 00:13:45.271 "num_blocks": 8192, 00:13:45.271 "block_size": 4096, 00:13:45.271 "physical_block_size": 4096, 00:13:45.271 "uuid": "45b96792-ca13-4e18-a457-89044322244c", 00:13:45.271 "optimal_io_boundary": 0, 00:13:45.271 "md_size": 0, 00:13:45.271 "dif_type": 0, 00:13:45.271 "dif_is_head_of_md": false, 00:13:45.271 "dif_pi_format": 0 00:13:45.271 } 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "method": "bdev_wait_for_examine" 00:13:45.271 } 00:13:45.271 ] 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "subsystem": "scsi", 00:13:45.271 "config": null 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "subsystem": "scheduler", 00:13:45.271 "config": [ 00:13:45.271 { 00:13:45.271 "method": "framework_set_scheduler", 00:13:45.271 "params": { 00:13:45.271 "name": "static" 00:13:45.271 } 00:13:45.271 } 00:13:45.271 ] 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "subsystem": "vhost_scsi", 00:13:45.271 "config": [] 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "subsystem": "vhost_blk", 00:13:45.271 "config": [] 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "subsystem": "ublk", 00:13:45.271 "config": [ 00:13:45.271 { 00:13:45.271 "method": "ublk_create_target", 00:13:45.271 "params": { 00:13:45.271 "cpumask": "1" 00:13:45.271 } 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "method": "ublk_start_disk", 00:13:45.271 "params": { 00:13:45.271 "bdev_name": "malloc0", 00:13:45.271 "ublk_id": 0, 00:13:45.271 "num_queues": 1, 00:13:45.271 "queue_depth": 128 00:13:45.271 } 00:13:45.271 } 00:13:45.271 ] 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "subsystem": "nbd", 00:13:45.271 "config": [] 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "subsystem": "nvmf", 00:13:45.271 "config": [ 00:13:45.271 { 00:13:45.271 "method": "nvmf_set_config", 00:13:45.271 "params": { 00:13:45.271 "discovery_filter": "match_any", 00:13:45.271 "admin_cmd_passthru": { 00:13:45.271 "identify_ctrlr": false 00:13:45.271 } 00:13:45.271 } 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "method": "nvmf_set_max_subsystems", 00:13:45.271 "params": { 00:13:45.271 "max_subsystems": 1024 00:13:45.271 } 00:13:45.271 }, 00:13:45.271 { 00:13:45.271 "method": "nvmf_set_crdt", 00:13:45.271 "params": { 00:13:45.271 "crdt1": 0, 00:13:45.271 "crdt2": 0, 00:13:45.271 "crdt3": 0 00:13:45.271 } 00:13:45.271 } 00:13:45.271 ] 00:13:45.271 }, 00:13:45.271 { 
00:13:45.271 "subsystem": "iscsi", 00:13:45.271 "config": [ 00:13:45.271 { 00:13:45.271 "method": "iscsi_set_options", 00:13:45.271 "params": { 00:13:45.271 "node_base": "iqn.2016-06.io.spdk", 00:13:45.271 "max_sessions": 128, 00:13:45.271 "max_connections_per_session": 2, 00:13:45.271 "max_queue_depth": 64, 00:13:45.271 "default_time2wait": 2, 00:13:45.271 "default_time2retain": 20, 00:13:45.271 "first_burst_length": 8192, 00:13:45.271 "immediate_data": true, 00:13:45.271 "allow_duplicated_isid": false, 00:13:45.271 "error_recovery_level": 0, 00:13:45.271 "nop_timeout": 60, 00:13:45.271 "nop_in_interval": 30, 00:13:45.271 "disable_chap": false, 00:13:45.271 "require_chap": false, 00:13:45.271 "mutual_chap": false, 00:13:45.271 "chap_group": 0, 00:13:45.271 "max_large_datain_per_connection": 64, 00:13:45.271 "max_r2t_per_connection": 4, 00:13:45.271 "pdu_pool_size": 36864, 00:13:45.271 "immediate_data_pool_size": 16384, 00:13:45.271 "data_out_pool_size": 2048 00:13:45.271 } 00:13:45.271 } 00:13:45.271 ] 00:13:45.271 } 00:13:45.271 ] 00:13:45.271 }' 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 81874 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 81874 ']' 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 81874 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81874 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:45.271 killing process with pid 81874 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81874' 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 81874 00:13:45.271 12:55:36 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 81874 00:13:45.530 [2024-08-11 12:55:36.985070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.530 [2024-08-11 12:55:37.014983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.530 [2024-08-11 12:55:37.015176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.530 [2024-08-11 12:55:37.022911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.530 [2024-08-11 12:55:37.022989] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:45.530 [2024-08-11 12:55:37.023011] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:45.530 [2024-08-11 12:55:37.023049] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:45.530 [2024-08-11 12:55:37.023221] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=81912 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 81912 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 81912 ']' 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
00:13:46.098 12:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:46.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:46.098 12:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:46.098 "subsystems": [ 00:13:46.098 { 00:13:46.098 "subsystem": "keyring", 00:13:46.098 "config": [] 00:13:46.098 }, 00:13:46.098 { 00:13:46.098 "subsystem": "iobuf", 00:13:46.098 "config": [ 00:13:46.098 { 00:13:46.098 "method": "iobuf_set_options", 00:13:46.098 "params": { 00:13:46.098 "small_pool_count": 8192, 00:13:46.098 "large_pool_count": 1024, 00:13:46.098 "small_bufsize": 8192, 00:13:46.098 "large_bufsize": 135168 00:13:46.098 } 00:13:46.098 } 00:13:46.098 ] 00:13:46.098 }, 00:13:46.098 { 00:13:46.098 "subsystem": "sock", 00:13:46.098 "config": [ 00:13:46.098 { 00:13:46.098 "method": "sock_set_default_impl", 00:13:46.098 "params": { 00:13:46.098 "impl_name": "posix" 00:13:46.098 } 00:13:46.098 }, 00:13:46.098 { 00:13:46.098 "method": "sock_impl_set_options", 00:13:46.098 "params": { 00:13:46.098 "impl_name": "ssl", 00:13:46.098 "recv_buf_size": 4096, 00:13:46.098 "send_buf_size": 4096, 00:13:46.098 "enable_recv_pipe": true, 00:13:46.098 "enable_quickack": false, 00:13:46.098 "enable_placement_id": 0, 00:13:46.098 "enable_zerocopy_send_server": true, 00:13:46.098 "enable_zerocopy_send_client": false, 00:13:46.098 "zerocopy_threshold": 0, 00:13:46.098 "tls_version": 0, 00:13:46.098 "enable_ktls": false 00:13:46.098 } 00:13:46.098 }, 00:13:46.098 { 00:13:46.098 "method": "sock_impl_set_options", 00:13:46.098 "params": { 00:13:46.098 "impl_name": "posix", 00:13:46.098 "recv_buf_size": 2097152, 00:13:46.098 "send_buf_size": 2097152, 00:13:46.098 "enable_recv_pipe": true, 00:13:46.098 "enable_quickack": false, 00:13:46.098 "enable_placement_id": 0, 00:13:46.098 "enable_zerocopy_send_server": true, 00:13:46.098 "enable_zerocopy_send_client": false, 00:13:46.098 "zerocopy_threshold": 0, 00:13:46.098 "tls_version": 0, 00:13:46.098 "enable_ktls": false 00:13:46.098 } 00:13:46.098 } 00:13:46.098 ] 00:13:46.098 }, 00:13:46.098 { 00:13:46.098 "subsystem": "vmd", 00:13:46.098 "config": [] 00:13:46.098 }, 00:13:46.098 { 00:13:46.098 "subsystem": "accel", 00:13:46.098 "config": [ 00:13:46.098 { 00:13:46.098 "method": "accel_set_options", 00:13:46.098 "params": { 00:13:46.098 "small_cache_size": 128, 00:13:46.098 "large_cache_size": 16, 00:13:46.098 "task_count": 2048, 00:13:46.098 "sequence_count": 2048, 00:13:46.098 "buf_count": 2048 00:13:46.098 } 00:13:46.098 } 00:13:46.098 ] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "bdev", 00:13:46.099 "config": [ 00:13:46.099 { 00:13:46.099 "method": "bdev_set_options", 00:13:46.099 "params": { 00:13:46.099 "bdev_io_pool_size": 65535, 00:13:46.099 "bdev_io_cache_size": 256, 00:13:46.099 "bdev_auto_examine": true, 00:13:46.099 "iobuf_small_cache_size": 128, 00:13:46.099 "iobuf_large_cache_size": 16 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 
{ 00:13:46.099 "method": "bdev_raid_set_options", 00:13:46.099 "params": { 00:13:46.099 "process_window_size_kb": 1024, 00:13:46.099 "process_max_bandwidth_mb_sec": 0 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "bdev_iscsi_set_options", 00:13:46.099 "params": { 00:13:46.099 "timeout_sec": 30 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "bdev_nvme_set_options", 00:13:46.099 "params": { 00:13:46.099 "action_on_timeout": "none", 00:13:46.099 "timeout_us": 0, 00:13:46.099 "timeout_admin_us": 0, 00:13:46.099 "keep_alive_timeout_ms": 10000, 00:13:46.099 "arbitration_burst": 0, 00:13:46.099 "low_priority_weight": 0, 00:13:46.099 "medium_priority_weight": 0, 00:13:46.099 "high_priority_weight": 0, 00:13:46.099 "nvme_adminq_poll_period_us": 10000, 00:13:46.099 "nvme_ioq_poll_period_us": 0, 00:13:46.099 "io_queue_requests": 0, 00:13:46.099 "delay_cmd_submit": true, 00:13:46.099 "transport_retry_count": 4, 00:13:46.099 "bdev_retry_count": 3, 00:13:46.099 "transport_ack_timeout": 0, 00:13:46.099 "ctrlr_loss_timeout_sec": 0, 00:13:46.099 "reconnect_delay_sec": 0, 00:13:46.099 "fast_io_fail_timeout_sec": 0, 00:13:46.099 "disable_auto_failback": false, 00:13:46.099 "generate_uuids": false, 00:13:46.099 "transport_tos": 0, 00:13:46.099 "nvme_error_stat": false, 00:13:46.099 "rdma_srq_size": 0, 00:13:46.099 "io_path_stat": false, 00:13:46.099 "allow_accel_sequence": false, 00:13:46.099 "rdma_max_cq_size": 0, 00:13:46.099 "rdma_cm_event_timeout_ms": 0, 00:13:46.099 "dhchap_digests": [ 00:13:46.099 "sha256", 00:13:46.099 "sha384", 00:13:46.099 "sha512" 00:13:46.099 ], 00:13:46.099 "dhchap_dhgroups": [ 00:13:46.099 "null", 00:13:46.099 "ffdhe2048", 00:13:46.099 "ffdhe3072", 00:13:46.099 "ffdhe4096", 00:13:46.099 "ffdhe6144", 00:13:46.099 "ffdhe8192" 00:13:46.099 ] 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "bdev_nvme_set_hotplug", 00:13:46.099 "params": { 00:13:46.099 "period_us": 100000, 00:13:46.099 "enable": false 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "bdev_malloc_create", 00:13:46.099 "params": { 00:13:46.099 "name": "malloc0", 00:13:46.099 "num_blocks": 8192, 00:13:46.099 "block_size": 4096, 00:13:46.099 "physical_block_size": 4096, 00:13:46.099 "uuid": "45b96792-ca13-4e18-a457-89044322244c", 00:13:46.099 "optimal_io_boundary": 0, 00:13:46.099 "md_size": 0, 00:13:46.099 "dif_type": 0, 00:13:46.099 "dif_is_head_of_md": false, 00:13:46.099 "dif_pi_format": 0 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "bdev_wait_for_examine" 00:13:46.099 } 00:13:46.099 ] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "scsi", 00:13:46.099 "config": null 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "scheduler", 00:13:46.099 "config": [ 00:13:46.099 { 00:13:46.099 "method": "framework_set_scheduler", 00:13:46.099 "params": { 00:13:46.099 "name": "static" 00:13:46.099 } 00:13:46.099 } 00:13:46.099 ] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "vhost_scsi", 00:13:46.099 "config": [] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "vhost_blk", 00:13:46.099 "config": [] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "ublk", 00:13:46.099 "config": [ 00:13:46.099 { 00:13:46.099 "method": "ublk_create_target", 00:13:46.099 "params": { 00:13:46.099 "cpumask": "1" 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "ublk_start_disk", 00:13:46.099 "params": { 00:13:46.099 "bdev_name": "malloc0", 00:13:46.099 "ublk_id": 0, 00:13:46.099 
"num_queues": 1, 00:13:46.099 "queue_depth": 128 00:13:46.099 } 00:13:46.099 } 00:13:46.099 ] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "nbd", 00:13:46.099 "config": [] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "nvmf", 00:13:46.099 "config": [ 00:13:46.099 { 00:13:46.099 "method": "nvmf_set_config", 00:13:46.099 "params": { 00:13:46.099 "discovery_filter": "match_any", 00:13:46.099 "admin_cmd_passthru": { 00:13:46.099 "identify_ctrlr": false 00:13:46.099 } 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "nvmf_set_max_subsystems", 00:13:46.099 "params": { 00:13:46.099 "max_subsystems": 1024 00:13:46.099 } 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "method": "nvmf_set_crdt", 00:13:46.099 "params": { 00:13:46.099 "crdt1": 0, 00:13:46.099 "crdt2": 0, 00:13:46.099 "crdt3": 0 00:13:46.099 } 00:13:46.099 } 00:13:46.099 ] 00:13:46.099 }, 00:13:46.099 { 00:13:46.099 "subsystem": "iscsi", 00:13:46.099 "config": [ 00:13:46.099 { 00:13:46.099 "method": "iscsi_set_options", 00:13:46.099 "params": { 00:13:46.099 "node_base": "iqn.2016-06.io.spdk", 00:13:46.099 "max_sessions": 128, 00:13:46.099 "max_connections_per_session": 2, 00:13:46.099 "max_queue_depth": 64, 00:13:46.099 "default_time2wait": 2, 00:13:46.099 "default_time2retain": 20, 00:13:46.099 "first_burst_length": 8192, 00:13:46.099 "immediate_data": true, 00:13:46.099 "allow_duplicated_isid": false, 00:13:46.099 "error_recovery_level": 0, 00:13:46.099 "nop_timeout": 60, 00:13:46.099 "nop_in_interval": 30, 00:13:46.099 "disable_chap": false, 00:13:46.099 "require_chap": false, 00:13:46.099 "mutual_chap": false, 00:13:46.099 "chap_group": 0, 00:13:46.099 "max_large_datain_per_connection": 64, 00:13:46.099 "max_r2t_per_connection": 4, 00:13:46.099 "pdu_pool_size": 36864, 00:13:46.099 "immediate_data_pool_size": 16384, 00:13:46.099 "data_out_pool_size": 2048 00:13:46.099 } 00:13:46.099 } 00:13:46.099 ] 00:13:46.099 } 00:13:46.099 ] 00:13:46.099 }' 00:13:46.099 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:46.099 [2024-08-11 12:55:37.509785] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:13:46.099 [2024-08-11 12:55:37.509974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81912 ] 00:13:46.099 [2024-08-11 12:55:37.658392] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.359 [2024-08-11 12:55:37.702891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.617 [2024-08-11 12:55:37.992891] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:46.617 [2024-08-11 12:55:37.993222] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:46.617 [2024-08-11 12:55:38.001026] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:46.617 [2024-08-11 12:55:38.001112] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:46.617 [2024-08-11 12:55:38.001127] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:46.617 [2024-08-11 12:55:38.001135] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:46.617 [2024-08-11 12:55:38.009968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:46.617 [2024-08-11 12:55:38.009994] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:46.617 [2024-08-11 12:55:38.016935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:46.617 [2024-08-11 12:55:38.017054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:46.617 [2024-08-11 12:55:38.033898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:46.876 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:46.876 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:13:46.876 12:55:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:46.876 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:46.876 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:46.876 12:55:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 81912 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 81912 ']' 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 81912 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81912 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:47.135 killing process with pid 81912 00:13:47.135 
12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81912' 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 81912 00:13:47.135 12:55:38 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 81912 00:13:47.394 [2024-08-11 12:55:38.744155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:47.394 [2024-08-11 12:55:38.781925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:47.394 [2024-08-11 12:55:38.782081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:47.394 [2024-08-11 12:55:38.789909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:47.394 [2024-08-11 12:55:38.789975] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:47.394 [2024-08-11 12:55:38.789988] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:47.394 [2024-08-11 12:55:38.790023] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:47.394 [2024-08-11 12:55:38.790211] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:47.652 12:55:39 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:47.652 00:13:47.652 real 0m3.880s 00:13:47.652 user 0m3.209s 00:13:47.652 sys 0m1.708s 00:13:47.652 12:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:47.652 12:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:47.652 ************************************ 00:13:47.652 END TEST test_save_ublk_config 00:13:47.652 ************************************ 00:13:47.652 12:55:39 ublk -- ublk/ublk.sh@139 -- # spdk_pid=81963 00:13:47.652 12:55:39 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.652 12:55:39 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:47.652 12:55:39 ublk -- ublk/ublk.sh@141 -- # waitforlisten 81963 00:13:47.652 12:55:39 ublk -- common/autotest_common.sh@827 -- # '[' -z 81963 ']' 00:13:47.652 12:55:39 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.652 12:55:39 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:47.652 12:55:39 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.652 12:55:39 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:47.652 12:55:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.910 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:47.910 [2024-08-11 12:55:39.297086] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
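The target restarted here (pid 81963, core mask 0x3) backs the remaining ublk tests. The first of them, test_create_ublk, exposes a 128 MiB malloc bdev as /dev/ublkb0 with 4 queues of depth 512; done by hand with the stock scripts/rpc.py client, that amounts to roughly:

    # Requires the ublk_drv kernel module, loaded earlier by ublk.sh.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" ublk_create_target
    "$SPDK/scripts/rpc.py" bdev_malloc_create 128 4096      # 128 MiB, 4 KiB blocks -> Malloc0
    "$SPDK/scripts/rpc.py" ublk_start_disk Malloc0 0 -q 4 -d 512
    "$SPDK/scripts/rpc.py" ublk_get_disks                   # reports /dev/ublkb0
    lsblk /dev/ublkb0                                        # visible as an ordinary block device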
00:13:47.910 [2024-08-11 12:55:39.297254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81963 ] 00:13:47.910 [2024-08-11 12:55:39.442276] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.910 [2024-08-11 12:55:39.484839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.910 [2024-08-11 12:55:39.484919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.169 12:55:39 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:48.169 12:55:39 ublk -- common/autotest_common.sh@860 -- # return 0 00:13:48.169 12:55:39 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:48.169 12:55:39 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:48.169 12:55:39 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.169 12:55:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.169 ************************************ 00:13:48.169 START TEST test_create_ublk 00:13:48.169 ************************************ 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:13:48.169 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.169 [2024-08-11 12:55:39.677903] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:48.169 [2024-08-11 12:55:39.679262] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:48.169 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:48.169 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:48.169 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:48.169 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:48.169 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.169 [2024-08-11 12:55:39.750079] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:48.169 [2024-08-11 12:55:39.750680] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:48.169 [2024-08-11 12:55:39.750724] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:48.169 [2024-08-11 12:55:39.750738] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:48.169 [2024-08-11 12:55:39.758212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:48.169 [2024-08-11 12:55:39.758259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:48.169 
[2024-08-11 12:55:39.765924] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:48.428 [2024-08-11 12:55:39.781012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:48.428 [2024-08-11 12:55:39.797032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:48.428 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:48.428 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:48.428 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.428 12:55:39 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:48.428 { 00:13:48.428 "ublk_device": "/dev/ublkb0", 00:13:48.428 "id": 0, 00:13:48.428 "queue_depth": 512, 00:13:48.428 "num_queues": 4, 00:13:48.428 "bdev_name": "Malloc0" 00:13:48.428 } 00:13:48.428 ]' 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:48.428 12:55:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:48.428 12:55:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:48.687 12:55:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:48.687 12:55:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:48.687 12:55:40 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
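The fio command assembled above is about to be launched against /dev/ublkb0. For reference, the single-device path traced in this test boils down to a short RPC sequence; the following is only an illustrative sketch (the rpc.py path, bdev name, block size, queue count, queue depth and fio options are copied from the trace, while the $rpc shorthand and comments are editorial), assuming a spdk_tgt already running as above with -m 0x3 -L ublk:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc ublk_create_target                       # create the userspace ublk target (ublk_drv must be loaded)
$rpc bdev_malloc_create -b Malloc0 128 4096   # 128 MiB RAM-backed bdev with 4096-byte blocks
$rpc ublk_start_disk Malloc0 0 -q 4 -d 512    # expose it as /dev/ublkb0 with 4 queues, queue depth 512
$rpc ublk_get_disks -n 0                      # confirm ublk_device, id, queue_depth, num_queues, bdev_name

fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

$rpc ublk_stop_disk 0                         # teardown in reverse order, as the test does below
$rpc ublk_destroy_target
$rpc bdev_malloc_delete Malloc0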
00:13:48.687 12:55:40 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:48.687 fio: verification read phase will never start because write phase uses all of runtime 00:13:48.687 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:48.687 fio-3.35 00:13:48.687 Starting 1 process 00:14:00.886 00:14:00.886 fio_test: (groupid=0, jobs=1): err= 0: pid=82001: Sun Aug 11 12:55:50 2024 00:14:00.886 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(455MiB/10001msec); 0 zone resets 00:14:00.886 clat (usec): min=54, max=4313, avg=84.37, stdev=130.14 00:14:00.886 lat (usec): min=55, max=4335, avg=85.17, stdev=130.16 00:14:00.886 clat percentiles (usec): 00:14:00.886 | 1.00th=[ 62], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:14:00.886 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 77], 00:14:00.886 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 103], 00:14:00.886 | 99.00th=[ 127], 99.50th=[ 141], 99.90th=[ 2737], 99.95th=[ 3195], 00:14:00.886 | 99.99th=[ 3752] 00:14:00.886 bw ( KiB/s): min=44648, max=49312, per=100.00%, avg=46630.58, stdev=880.20, samples=19 00:14:00.886 iops : min=11162, max=12328, avg=11657.58, stdev=220.06, samples=19 00:14:00.886 lat (usec) : 100=93.96%, 250=5.71%, 500=0.01%, 750=0.01%, 1000=0.02% 00:14:00.886 lat (msec) : 2=0.11%, 4=0.17%, 10=0.01% 00:14:00.886 cpu : usr=3.49%, sys=8.87%, ctx=116444, majf=0, minf=796 00:14:00.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.886 issued rwts: total=0,116444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.886 00:14:00.886 Run status group 0 (all jobs): 00:14:00.886 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=455MiB (477MB), run=10001-10001msec 00:14:00.886 00:14:00.886 Disk stats (read/write): 00:14:00.886 ublkb0: ios=0/115201, merge=0/0, ticks=0/8762, in_queue=8763, util=99.11% 00:14:00.886 12:55:50 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:00.886 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.886 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.886 [2024-08-11 12:55:50.324435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.886 [2024-08-11 12:55:50.353463] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.886 [2024-08-11 12:55:50.354558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.886 [2024-08-11 12:55:50.362088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.886 [2024-08-11 12:55:50.362524] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:00.886 [2024-08-11 12:55:50.362548] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:00.887 12:55:50 
ublk.test_create_ublk -- common/autotest_common.sh@646 -- # local es=0 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@649 -- # rpc_cmd ublk_stop_disk 0 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 [2024-08-11 12:55:50.377076] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:00.887 request: 00:14:00.887 { 00:14:00.887 "ublk_id": 0, 00:14:00.887 "method": "ublk_stop_disk", 00:14:00.887 "req_id": 1 00:14:00.887 } 00:14:00.887 Got JSON-RPC error response 00:14:00.887 response: 00:14:00.887 { 00:14:00.887 "code": -19, 00:14:00.887 "message": "No such device" 00:14:00.887 } 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@649 -- # es=1 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:14:00.887 12:55:50 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 [2024-08-11 12:55:50.393126] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:00.887 [2024-08-11 12:55:50.394985] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:00.887 [2024-08-11 12:55:50.395032] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:00.887 12:55:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:00.887 00:14:00.887 real 0m10.942s 00:14:00.887 user 0m0.788s 00:14:00.887 sys 0m1.001s 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:00.887 ************************************ 00:14:00.887 END TEST test_create_ublk 00:14:00.887 12:55:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 ************************************ 00:14:00.887 12:55:50 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:00.887 12:55:50 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:00.887 12:55:50 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:00.887 12:55:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 ************************************ 00:14:00.887 START TEST test_create_multi_ublk 00:14:00.887 ************************************ 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 [2024-08-11 12:55:50.677033] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:00.887 [2024-08-11 12:55:50.678289] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 [2024-08-11 12:55:50.761348] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:00.887 [2024-08-11 
12:55:50.761856] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:00.887 [2024-08-11 12:55:50.761923] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:00.887 [2024-08-11 12:55:50.761938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.887 [2024-08-11 12:55:50.773157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.887 [2024-08-11 12:55:50.773200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.887 [2024-08-11 12:55:50.785009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.887 [2024-08-11 12:55:50.785816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:00.887 [2024-08-11 12:55:50.794457] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 [2024-08-11 12:55:50.877145] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:00.887 [2024-08-11 12:55:50.877676] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:00.887 [2024-08-11 12:55:50.877700] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:00.887 [2024-08-11 12:55:50.877710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.887 [2024-08-11 12:55:50.889032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.887 [2024-08-11 12:55:50.889058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.887 [2024-08-11 12:55:50.901035] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.887 [2024-08-11 12:55:50.901779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:00.887 [2024-08-11 12:55:50.936980] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.887 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:00.887 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:00.887 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.887 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.887 [2024-08-11 12:55:51.033095] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:00.887 [2024-08-11 12:55:51.033649] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:00.887 [2024-08-11 12:55:51.033672] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:00.887 [2024-08-11 12:55:51.033685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.887 [2024-08-11 12:55:51.041071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.887 [2024-08-11 12:55:51.041101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.888 [2024-08-11 12:55:51.052909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.888 [2024-08-11 12:55:51.053632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:00.888 [2024-08-11 12:55:51.092911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.888 [2024-08-11 12:55:51.188076] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:00.888 [2024-08-11 12:55:51.188571] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:00.888 [2024-08-11 12:55:51.188600] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:00.888 [2024-08-11 12:55:51.188611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.888 [2024-08-11 12:55:51.199927] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.888 [2024-08-11 12:55:51.199955] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.888 [2024-08-11 12:55:51.210951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.888 [2024-08-11 12:55:51.211727] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:00.888 [2024-08-11 12:55:51.223944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:00.888 { 00:14:00.888 "ublk_device": "/dev/ublkb0", 00:14:00.888 "id": 0, 00:14:00.888 "queue_depth": 512, 00:14:00.888 "num_queues": 4, 00:14:00.888 "bdev_name": "Malloc0" 00:14:00.888 }, 00:14:00.888 { 00:14:00.888 "ublk_device": "/dev/ublkb1", 00:14:00.888 "id": 1, 00:14:00.888 "queue_depth": 512, 00:14:00.888 "num_queues": 4, 00:14:00.888 "bdev_name": "Malloc1" 00:14:00.888 }, 00:14:00.888 { 00:14:00.888 "ublk_device": "/dev/ublkb2", 00:14:00.888 "id": 2, 00:14:00.888 "queue_depth": 512, 00:14:00.888 "num_queues": 4, 00:14:00.888 "bdev_name": "Malloc2" 00:14:00.888 }, 00:14:00.888 { 00:14:00.888 "ublk_device": "/dev/ublkb3", 00:14:00.888 "id": 3, 00:14:00.888 "queue_depth": 512, 00:14:00.888 "num_queues": 4, 00:14:00.888 "bdev_name": "Malloc3" 00:14:00.888 } 00:14:00.888 ]' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:00.888 12:55:51 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.888 12:55:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.888 [2024-08-11 12:55:52.312196] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:14:00.888 [2024-08-11 12:55:52.349337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.888 [2024-08-11 12:55:52.350547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.888 [2024-08-11 12:55:52.356914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.888 [2024-08-11 12:55:52.357240] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:00.888 [2024-08-11 12:55:52.357266] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.888 [2024-08-11 12:55:52.372011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.888 [2024-08-11 12:55:52.423492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.888 [2024-08-11 12:55:52.424695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.888 [2024-08-11 12:55:52.434004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.888 [2024-08-11 12:55:52.434336] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:00.888 [2024-08-11 12:55:52.434357] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:00.888 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:00.889 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.889 [2024-08-11 12:55:52.448998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:01.148 [2024-08-11 12:55:52.493956] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:01.148 [2024-08-11 12:55:52.494972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:01.148 [2024-08-11 12:55:52.500918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:01.148 [2024-08-11 12:55:52.501238] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:01.148 [2024-08-11 12:55:52.501263] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:01.148 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.148 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:01.148 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:01.148 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:01.148 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.148 [2024-08-11 
12:55:52.515971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:01.148 [2024-08-11 12:55:52.556936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:01.148 [2024-08-11 12:55:52.557800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:01.148 [2024-08-11 12:55:52.563899] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:01.148 [2024-08-11 12:55:52.564234] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:01.148 [2024-08-11 12:55:52.564260] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:01.148 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.148 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:01.407 [2024-08-11 12:55:52.849172] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:01.407 [2024-08-11 12:55:52.850370] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:01.407 [2024-08-11 12:55:52.850415] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:01.407 12:55:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
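The multi-device variant traced above runs the same create/teardown sequence in a loop over IDs 0-3, one Malloc bdev per ublk device. A compact, non-authoritative sketch using only the RPC calls visible in this trace (bdev names, sizes, queue settings and the 120-second destroy timeout mirror the log; the loop form is editorial):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc ublk_create_target
for i in 0 1 2 3; do
    $rpc bdev_malloc_create -b Malloc$i 128 4096    # Malloc0..Malloc3
    $rpc ublk_start_disk Malloc$i $i -q 4 -d 512    # /dev/ublkb0 .. /dev/ublkb3
done
$rpc ublk_get_disks                                 # should report all four devices

for i in 0 1 2 3; do
    $rpc ublk_stop_disk $i
done
$rpc -t 120 ublk_destroy_target                     # longer RPC timeout for the final shutdown
for i in 0 1 2 3; do
    $rpc bdev_malloc_delete Malloc$i
done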
00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:01.667 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:01.926 ************************************ 00:14:01.926 END TEST test_create_multi_ublk 00:14:01.926 ************************************ 00:14:01.926 12:55:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:01.926 00:14:01.926 real 0m2.631s 00:14:01.926 user 0m1.320s 00:14:01.926 sys 0m0.177s 00:14:01.926 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:01.926 12:55:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.926 12:55:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:01.926 12:55:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:01.926 12:55:53 ublk -- ublk/ublk.sh@130 -- # killprocess 81963 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@946 -- # '[' -z 81963 ']' 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@950 -- # kill -0 81963 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@951 -- # uname 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81963 00:14:01.926 killing process with pid 81963 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81963' 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@965 -- # kill 81963 00:14:01.926 12:55:53 ublk -- common/autotest_common.sh@970 -- # wait 81963 00:14:02.184 [2024-08-11 12:55:53.555535] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:02.184 [2024-08-11 12:55:53.555630] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:02.184 00:14:02.184 real 0m18.574s 00:14:02.184 user 0m28.951s 00:14:02.184 sys 0m8.213s 00:14:02.184 12:55:53 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.184 ************************************ 00:14:02.184 END TEST ublk 00:14:02.184 ************************************ 00:14:02.184 12:55:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.443 12:55:53 -- spdk/autotest.sh@261 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:02.443 12:55:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:02.443 
12:55:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.443 12:55:53 -- common/autotest_common.sh@10 -- # set +x 00:14:02.443 ************************************ 00:14:02.443 START TEST ublk_recovery 00:14:02.443 ************************************ 00:14:02.443 12:55:53 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:02.443 * Looking for test storage... 00:14:02.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:02.443 12:55:53 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:02.443 12:55:53 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:02.443 12:55:53 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:02.443 12:55:53 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82314 00:14:02.443 12:55:53 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:02.443 12:55:53 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.443 12:55:53 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82314 00:14:02.443 12:55:53 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 82314 ']' 00:14:02.443 12:55:53 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.443 12:55:53 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.443 12:55:53 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.443 12:55:53 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.443 12:55:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.443 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:02.443 [2024-08-11 12:55:53.986510] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:14:02.443 [2024-08-11 12:55:53.986798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82314 ] 00:14:02.703 [2024-08-11 12:55:54.125472] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:02.703 [2024-08-11 12:55:54.164543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.703 [2024-08-11 12:55:54.164586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:03.638 12:55:54 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.638 [2024-08-11 12:55:54.947021] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:03.638 [2024-08-11 12:55:54.948336] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:03.638 12:55:54 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.638 malloc0 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:03.638 12:55:54 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:03.638 12:55:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.638 [2024-08-11 12:55:54.989131] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:03.638 [2024-08-11 12:55:54.989297] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:03.638 [2024-08-11 12:55:54.989312] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:03.638 [2024-08-11 12:55:54.989320] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:03.638 [2024-08-11 12:55:54.993285] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:03.638 [2024-08-11 12:55:54.993344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:03.638 [2024-08-11 12:55:55.001011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:03.638 [2024-08-11 12:55:55.001189] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:03.638 [2024-08-11 12:55:55.009425] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:03.638 1 00:14:03.638 12:55:55 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:03.638 12:55:55 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:04.575 12:55:56 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82347 00:14:04.575 12:55:56 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test 
--filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:04.575 12:55:56 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:04.575 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:04.575 fio-3.35 00:14:04.575 Starting 1 process 00:14:09.851 12:56:01 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82314 00:14:09.851 12:56:01 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:15.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.154 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82314 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:15.154 12:56:06 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82458 00:14:15.154 12:56:06 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:15.154 12:56:06 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:15.154 12:56:06 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82458 00:14:15.154 12:56:06 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 82458 ']' 00:14:15.154 12:56:06 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.154 12:56:06 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:15.154 12:56:06 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.154 12:56:06 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:15.154 12:56:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.154 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:15.154 [2024-08-11 12:56:06.148665] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
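At this point the original target (pid 82314) has been killed with SIGKILL while fio was still writing to /dev/ublkb1, and a replacement spdk_tgt is starting. The recovery path exercised just below reduces to re-creating the target and backing bdev and then re-attaching the existing device with ublk_recover_disk instead of ublk_start_disk, so the still-running fio job can complete. A minimal sketch, assuming the same rpc.py helper and the names taken from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# after restarting the target:  spdk_tgt -m 0x3 -L ublk
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096   # same 64 MiB bdev the device was originally built on
$rpc ublk_recover_disk malloc0 1             # re-attach existing ublk device 1 rather than creating a new one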
00:14:15.154 [2024-08-11 12:56:06.149053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82458 ] 00:14:15.154 [2024-08-11 12:56:06.301816] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.154 [2024-08-11 12:56:06.351698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.154 [2024-08-11 12:56:06.351699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.721 12:56:07 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.721 12:56:07 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:15.721 12:56:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:15.721 12:56:07 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:15.721 12:56:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.722 [2024-08-11 12:56:07.117962] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:15.722 [2024-08-11 12:56:07.119269] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:15.722 12:56:07 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:15.722 12:56:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:15.722 12:56:07 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:15.722 12:56:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.722 malloc0 00:14:15.722 12:56:07 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:15.722 12:56:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:15.722 12:56:07 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:15.722 12:56:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:15.722 [2024-08-11 12:56:07.156087] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:15.722 [2024-08-11 12:56:07.156142] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:15.722 [2024-08-11 12:56:07.156171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:15.722 [2024-08-11 12:56:07.162082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:15.722 [2024-08-11 12:56:07.162127] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:15.722 1 00:14:15.722 12:56:07 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:15.722 12:56:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82347 00:14:16.656 [2024-08-11 12:56:08.162153] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:16.656 [2024-08-11 12:56:08.167066] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:16.656 [2024-08-11 12:56:08.167089] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:17.593 [2024-08-11 12:56:09.167129] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:17.593 [2024-08-11 12:56:09.177000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:17.593 [2024-08-11 12:56:09.177152] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:18.968 [2024-08-11 12:56:10.177220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:18.968 [2024-08-11 12:56:10.184987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:18.968 [2024-08-11 12:56:10.185012] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:18.968 [2024-08-11 12:56:10.185026] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:18.968 [2024-08-11 12:56:10.185113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:40.908 [2024-08-11 12:56:31.217101] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:40.908 [2024-08-11 12:56:31.225382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:40.908 [2024-08-11 12:56:31.233249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:40.908 [2024-08-11 12:56:31.233435] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:07.458 00:15:07.458 fio_test: (groupid=0, jobs=1): err= 0: pid=82350: Sun Aug 11 12:56:56 2024 00:15:07.458 read: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(2537MiB/60002msec) 00:15:07.458 slat (usec): min=2, max=1435, avg= 6.24, stdev= 4.31 00:15:07.458 clat (usec): min=1076, max=30224k, avg=5947.19, stdev=304603.82 00:15:07.458 lat (usec): min=1382, max=30224k, avg=5953.43, stdev=304603.83 00:15:07.458 clat percentiles (msec): 00:15:07.458 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:15:07.458 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:15:07.458 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 4], 95.00th=[ 4], 00:15:07.458 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 11], 00:15:07.458 | 99.99th=[17113] 00:15:07.458 bw ( KiB/s): min= 1634, max=93648, per=100.00%, avg=85290.82, stdev=14561.29, samples=60 00:15:07.458 iops : min= 408, max=23412, avg=21322.68, stdev=3640.37, samples=60 00:15:07.458 write: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(2535MiB/60002msec); 0 zone resets 00:15:07.458 slat (usec): min=2, max=1146, avg= 6.39, stdev= 3.82 00:15:07.458 clat (usec): min=1086, max=30224k, avg=5866.47, stdev=295353.31 00:15:07.458 lat (usec): min=1094, max=30224k, avg=5872.86, stdev=295353.30 00:15:07.458 clat percentiles (usec): 00:15:07.458 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2671], 00:15:07.458 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2900], 00:15:07.458 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3916], 00:15:07.458 | 99.00th=[ 6063], 99.50th=[ 6652], 99.90th=[ 8979], 99.95th=[10421], 00:15:07.458 | 99.99th=[13829] 00:15:07.458 bw ( KiB/s): min= 1500, max=93912, per=100.00%, avg=85237.38, stdev=14630.93, samples=60 00:15:07.458 iops : min= 375, max=23478, avg=21309.33, stdev=3657.73, samples=60 00:15:07.458 lat (msec) : 2=0.17%, 4=95.00%, 10=4.77%, 20=0.04%, >=2000=0.01% 00:15:07.458 cpu : usr=6.09%, sys=12.51%, ctx=39358, majf=0, minf=13 00:15:07.458 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:07.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.458 issued rwts: total=649581,649037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.458 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:15:07.458 00:15:07.458 Run status group 0 (all jobs): 00:15:07.458 READ: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=2537MiB (2661MB), run=60002-60002msec 00:15:07.458 WRITE: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=2535MiB (2658MB), run=60002-60002msec 00:15:07.458 00:15:07.458 Disk stats (read/write): 00:15:07.458 ublkb1: ios=647020/646505, merge=0/0, ticks=3796638/3669788, in_queue=7466426, util=99.93% 00:15:07.458 12:56:56 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.458 [2024-08-11 12:56:56.277949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:07.458 [2024-08-11 12:56:56.325059] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:07.458 [2024-08-11 12:56:56.325229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:07.458 [2024-08-11 12:56:56.329388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:07.458 [2024-08-11 12:56:56.329690] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:07.458 [2024-08-11 12:56:56.333012] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:07.458 12:56:56 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.458 [2024-08-11 12:56:56.343014] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:07.458 [2024-08-11 12:56:56.344428] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:07.458 [2024-08-11 12:56:56.344487] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:07.458 12:56:56 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:07.458 12:56:56 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:07.458 12:56:56 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82458 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 82458 ']' 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 82458 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82458 00:15:07.458 killing process with pid 82458 00:15:07.458 12:56:56 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:07.459 12:56:56 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:07.459 12:56:56 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82458' 00:15:07.459 12:56:56 ublk_recovery -- common/autotest_common.sh@965 -- # kill 82458 00:15:07.459 12:56:56 ublk_recovery -- common/autotest_common.sh@970 -- # wait 82458 00:15:07.459 [2024-08-11 12:56:56.587436] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:07.459 
[2024-08-11 12:56:56.587541] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:07.459 00:15:07.459 real 1m3.158s 00:15:07.459 user 1m46.932s 00:15:07.459 sys 0m20.293s 00:15:07.459 12:56:56 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:07.459 ************************************ 00:15:07.459 END TEST ublk_recovery 00:15:07.459 ************************************ 00:15:07.459 12:56:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.459 12:56:57 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@269 -- # timing_exit lib 00:15:07.459 12:56:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:07.459 12:56:57 -- common/autotest_common.sh@10 -- # set +x 00:15:07.459 12:56:57 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@285 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@323 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@332 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@350 -- # '[' 1 -eq 1 ']' 00:15:07.459 12:56:57 -- spdk/autotest.sh@351 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:07.459 12:56:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:07.459 12:56:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:07.459 12:56:57 -- common/autotest_common.sh@10 -- # set +x 00:15:07.459 ************************************ 00:15:07.459 START TEST ftl 00:15:07.459 ************************************ 00:15:07.459 12:56:57 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:07.459 * Looking for test storage... 00:15:07.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:07.459 12:56:57 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:07.459 12:56:57 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:07.459 12:56:57 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:07.459 12:56:57 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:07.459 12:56:57 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:07.459 12:56:57 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.459 12:56:57 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:07.459 12:56:57 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:07.459 12:56:57 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.459 12:56:57 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.459 12:56:57 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:07.459 12:56:57 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:07.459 12:56:57 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:07.459 12:56:57 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:07.459 12:56:57 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:07.459 12:56:57 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:07.459 12:56:57 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.459 12:56:57 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.459 12:56:57 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:07.459 12:56:57 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:07.459 12:56:57 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:07.459 12:56:57 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:07.459 12:56:57 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:07.459 12:56:57 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:07.459 12:56:57 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:07.459 12:56:57 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:07.459 12:56:57 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:07.459 12:56:57 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:07.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:07.459 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:07.459 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:07.459 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:07.459 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83245 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:07.459 12:56:57 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83245 00:15:07.459 12:56:57 ftl -- common/autotest_common.sh@827 -- # '[' -z 83245 ']' 00:15:07.459 12:56:57 ftl -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.459 12:56:57 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:07.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.459 12:56:57 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.459 12:56:57 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:07.459 12:56:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:07.459 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:15:07.459 [2024-08-11 12:56:57.793864] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:15:07.459 [2024-08-11 12:56:57.794098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83245 ] 00:15:07.459 [2024-08-11 12:56:57.942781] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.459 [2024-08-11 12:56:57.979464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.459 12:56:58 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:07.459 12:56:58 ftl -- common/autotest_common.sh@860 -- # return 0 00:15:07.459 12:56:58 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:07.459 12:56:58 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:08.027 12:56:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:08.027 12:56:59 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:08.286 12:56:59 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:08.286 12:56:59 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:08.286 12:56:59 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:08.544 12:57:00 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:08.544 12:57:00 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:08.544 12:57:00 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:08.544 12:57:00 ftl -- ftl/ftl.sh@50 -- # break 00:15:08.544 12:57:00 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:08.544 12:57:00 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:08.803 12:57:00 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:08.803 12:57:00 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:09.062 12:57:00 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:09.062 12:57:00 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:09.062 12:57:00 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:09.062 12:57:00 ftl -- ftl/ftl.sh@63 -- # break 00:15:09.062 12:57:00 ftl -- ftl/ftl.sh@66 -- # killprocess 83245 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@946 -- # '[' -z 83245 ']' 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@950 -- # kill -0 83245 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@951 -- # uname 00:15:09.062 12:57:00 ftl -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83245 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:09.062 killing process with pid 83245 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83245' 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@965 -- # kill 83245 00:15:09.062 12:57:00 ftl -- common/autotest_common.sh@970 -- # wait 83245 00:15:09.321 12:57:00 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:09.321 12:57:00 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:09.321 12:57:00 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:09.321 12:57:00 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:09.321 12:57:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:09.321 ************************************ 00:15:09.321 START TEST ftl_fio_basic 00:15:09.321 ************************************ 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:09.321 * Looking for test storage... 00:15:09.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
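[editor's note] The device-selection step traced a few lines up (ftl.sh lines 47 and 60) reduces to the shell below. The jq filters are copied from the trace; the $rpc variable is shorthand for the rpc.py path used throughout this run, and the reading of md_size==64 as "namespace formatted with 64-byte per-block metadata, which FTL uses for its write-buffer cache" is an interpretation, not something stated in the log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Cache candidates: non-zoned namespaces with 64-byte per-block metadata
    # and at least 1310720 blocks.
    cache_disks=$($rpc bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

    # Base candidates: any other non-zoned namespace of at least 1310720 blocks;
    # 0000:00:10.0 is the cache address already chosen in the previous step.
    base_disks=$($rpc bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')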
00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:09.321 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83363 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83363 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 83363 ']' 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:09.322 12:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:09.580 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:15:09.580 [2024-08-11 12:57:00.960913] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
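[editor's note] waitforlisten above blocks until the freshly started spdk_tgt (pid 83363) answers RPC on /var/tmp/spdk.sock. Only the socket path and the retry budget of 100 appear in the trace; the rpc_get_methods probe and the 0.1 s poll interval in this rough stand-in are assumptions about what the helper does, not copied from autotest_common.sh.

    # Rough approximation of "spdk_tgt -m 7" + waitforlisten, for orientation only.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
    svcpid=$!
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done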
00:15:09.580 [2024-08-11 12:57:00.961103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83363 ] 00:15:09.580 [2024-08-11 12:57:01.109690] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.580 [2024-08-11 12:57:01.147665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.580 [2024-08-11 12:57:01.147741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.580 [2024-08-11 12:57:01.147702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:10.526 12:57:01 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:10.785 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:11.044 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:11.044 { 00:15:11.044 "name": "nvme0n1", 00:15:11.044 "aliases": [ 00:15:11.044 "b6030354-4eee-4712-8827-aedebd2213f7" 00:15:11.044 ], 00:15:11.044 "product_name": "NVMe disk", 00:15:11.044 "block_size": 4096, 00:15:11.044 "num_blocks": 1310720, 00:15:11.044 "uuid": "b6030354-4eee-4712-8827-aedebd2213f7", 00:15:11.044 "assigned_rate_limits": { 00:15:11.044 "rw_ios_per_sec": 0, 00:15:11.044 "rw_mbytes_per_sec": 0, 00:15:11.044 "r_mbytes_per_sec": 0, 00:15:11.044 "w_mbytes_per_sec": 0 00:15:11.044 }, 00:15:11.044 "claimed": false, 00:15:11.044 "zoned": false, 00:15:11.044 "supported_io_types": { 00:15:11.044 "read": true, 00:15:11.044 "write": true, 00:15:11.044 "unmap": true, 00:15:11.044 "flush": true, 00:15:11.044 "reset": true, 00:15:11.044 "nvme_admin": true, 00:15:11.044 "nvme_io": true, 00:15:11.044 "nvme_io_md": false, 00:15:11.044 "write_zeroes": true, 00:15:11.044 "zcopy": false, 00:15:11.044 "get_zone_info": false, 00:15:11.044 "zone_management": false, 00:15:11.044 "zone_append": false, 00:15:11.044 "compare": true, 00:15:11.044 "compare_and_write": false, 00:15:11.044 "abort": true, 00:15:11.044 "seek_hole": false, 00:15:11.044 
"seek_data": false, 00:15:11.044 "copy": true, 00:15:11.044 "nvme_iov_md": false 00:15:11.044 }, 00:15:11.044 "driver_specific": { 00:15:11.044 "nvme": [ 00:15:11.044 { 00:15:11.044 "pci_address": "0000:00:11.0", 00:15:11.044 "trid": { 00:15:11.044 "trtype": "PCIe", 00:15:11.044 "traddr": "0000:00:11.0" 00:15:11.044 }, 00:15:11.044 "ctrlr_data": { 00:15:11.044 "cntlid": 0, 00:15:11.044 "vendor_id": "0x1b36", 00:15:11.044 "model_number": "QEMU NVMe Ctrl", 00:15:11.044 "serial_number": "12341", 00:15:11.044 "firmware_revision": "8.0.0", 00:15:11.044 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:11.044 "oacs": { 00:15:11.044 "security": 0, 00:15:11.044 "format": 1, 00:15:11.044 "firmware": 0, 00:15:11.044 "ns_manage": 1 00:15:11.044 }, 00:15:11.044 "multi_ctrlr": false, 00:15:11.044 "ana_reporting": false 00:15:11.044 }, 00:15:11.044 "vs": { 00:15:11.044 "nvme_version": "1.4" 00:15:11.044 }, 00:15:11.044 "ns_data": { 00:15:11.044 "id": 1, 00:15:11.044 "can_share": false 00:15:11.044 } 00:15:11.044 } 00:15:11.044 ], 00:15:11.044 "mp_policy": "active_passive" 00:15:11.044 } 00:15:11.044 } 00:15:11.044 ]' 00:15:11.044 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:11.044 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:11.044 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:11.303 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:11.568 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:11.568 12:57:02 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:11.568 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5801bcce-ffbf-424d-92cc-384036302784 00:15:11.568 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5801bcce-ffbf-424d-92cc-384036302784 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=48c18944-342d-4d32-84f8-e18750e065da 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 48c18944-342d-4d32-84f8-e18750e065da 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=48c18944-342d-4d32-84f8-e18750e065da 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 48c18944-342d-4d32-84f8-e18750e065da 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=48c18944-342d-4d32-84f8-e18750e065da 00:15:11.827 12:57:03 
ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:11.827 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48c18944-342d-4d32-84f8-e18750e065da 00:15:12.087 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:12.087 { 00:15:12.087 "name": "48c18944-342d-4d32-84f8-e18750e065da", 00:15:12.087 "aliases": [ 00:15:12.087 "lvs/nvme0n1p0" 00:15:12.087 ], 00:15:12.087 "product_name": "Logical Volume", 00:15:12.087 "block_size": 4096, 00:15:12.087 "num_blocks": 26476544, 00:15:12.087 "uuid": "48c18944-342d-4d32-84f8-e18750e065da", 00:15:12.087 "assigned_rate_limits": { 00:15:12.087 "rw_ios_per_sec": 0, 00:15:12.087 "rw_mbytes_per_sec": 0, 00:15:12.087 "r_mbytes_per_sec": 0, 00:15:12.087 "w_mbytes_per_sec": 0 00:15:12.087 }, 00:15:12.087 "claimed": false, 00:15:12.087 "zoned": false, 00:15:12.087 "supported_io_types": { 00:15:12.087 "read": true, 00:15:12.087 "write": true, 00:15:12.087 "unmap": true, 00:15:12.087 "flush": false, 00:15:12.087 "reset": true, 00:15:12.087 "nvme_admin": false, 00:15:12.087 "nvme_io": false, 00:15:12.087 "nvme_io_md": false, 00:15:12.087 "write_zeroes": true, 00:15:12.087 "zcopy": false, 00:15:12.087 "get_zone_info": false, 00:15:12.087 "zone_management": false, 00:15:12.087 "zone_append": false, 00:15:12.087 "compare": false, 00:15:12.087 "compare_and_write": false, 00:15:12.087 "abort": false, 00:15:12.087 "seek_hole": true, 00:15:12.087 "seek_data": true, 00:15:12.087 "copy": false, 00:15:12.087 "nvme_iov_md": false 00:15:12.087 }, 00:15:12.087 "driver_specific": { 00:15:12.087 "lvol": { 00:15:12.087 "lvol_store_uuid": "5801bcce-ffbf-424d-92cc-384036302784", 00:15:12.087 "base_bdev": "nvme0n1", 00:15:12.087 "thin_provision": true, 00:15:12.087 "num_allocated_clusters": 0, 00:15:12.087 "snapshot": false, 00:15:12.087 "clone": false, 00:15:12.087 "esnap_clone": false 00:15:12.087 } 00:15:12.087 } 00:15:12.087 } 00:15:12.087 ]' 00:15:12.087 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:12.346 12:57:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 48c18944-342d-4d32-84f8-e18750e065da 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=48c18944-342d-4d32-84f8-e18750e065da 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1375 -- # local bdev_info 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:12.606 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48c18944-342d-4d32-84f8-e18750e065da 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:12.865 { 00:15:12.865 "name": "48c18944-342d-4d32-84f8-e18750e065da", 00:15:12.865 "aliases": [ 00:15:12.865 "lvs/nvme0n1p0" 00:15:12.865 ], 00:15:12.865 "product_name": "Logical Volume", 00:15:12.865 "block_size": 4096, 00:15:12.865 "num_blocks": 26476544, 00:15:12.865 "uuid": "48c18944-342d-4d32-84f8-e18750e065da", 00:15:12.865 "assigned_rate_limits": { 00:15:12.865 "rw_ios_per_sec": 0, 00:15:12.865 "rw_mbytes_per_sec": 0, 00:15:12.865 "r_mbytes_per_sec": 0, 00:15:12.865 "w_mbytes_per_sec": 0 00:15:12.865 }, 00:15:12.865 "claimed": false, 00:15:12.865 "zoned": false, 00:15:12.865 "supported_io_types": { 00:15:12.865 "read": true, 00:15:12.865 "write": true, 00:15:12.865 "unmap": true, 00:15:12.865 "flush": false, 00:15:12.865 "reset": true, 00:15:12.865 "nvme_admin": false, 00:15:12.865 "nvme_io": false, 00:15:12.865 "nvme_io_md": false, 00:15:12.865 "write_zeroes": true, 00:15:12.865 "zcopy": false, 00:15:12.865 "get_zone_info": false, 00:15:12.865 "zone_management": false, 00:15:12.865 "zone_append": false, 00:15:12.865 "compare": false, 00:15:12.865 "compare_and_write": false, 00:15:12.865 "abort": false, 00:15:12.865 "seek_hole": true, 00:15:12.865 "seek_data": true, 00:15:12.865 "copy": false, 00:15:12.865 "nvme_iov_md": false 00:15:12.865 }, 00:15:12.865 "driver_specific": { 00:15:12.865 "lvol": { 00:15:12.865 "lvol_store_uuid": "5801bcce-ffbf-424d-92cc-384036302784", 00:15:12.865 "base_bdev": "nvme0n1", 00:15:12.865 "thin_provision": true, 00:15:12.865 "num_allocated_clusters": 0, 00:15:12.865 "snapshot": false, 00:15:12.865 "clone": false, 00:15:12.865 "esnap_clone": false 00:15:12.865 } 00:15:12.865 } 00:15:12.865 } 00:15:12.865 ]' 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:12.865 12:57:04 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:13.124 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 48c18944-342d-4d32-84f8-e18750e065da 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=48c18944-342d-4d32-84f8-e18750e065da 
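[editor's note] Condensed, the bdev stack assembled by the RPCs traced above (names and UUIDs taken from this run) is the sequence below; the bdev_ftl_create call that ties the two halves together appears immediately after this point in the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: thin-provisioned 103424 MiB lvol on the 0000:00:11.0 namespace.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 5801bcce-ffbf-424d-92cc-384036302784

    # Write-buffer cache: one 5171 MiB split of the 0000:00:10.0 namespace.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev over the pair (240 s RPC timeout, 60 MiB L2P DRAM limit):
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 48c18944-342d-4d32-84f8-e18750e065da -c nvc0n1p0 --l2p_dram_limit 60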
00:15:13.124 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:13.124 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48c18944-342d-4d32-84f8-e18750e065da 00:15:13.383 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:13.383 { 00:15:13.383 "name": "48c18944-342d-4d32-84f8-e18750e065da", 00:15:13.383 "aliases": [ 00:15:13.383 "lvs/nvme0n1p0" 00:15:13.383 ], 00:15:13.383 "product_name": "Logical Volume", 00:15:13.383 "block_size": 4096, 00:15:13.383 "num_blocks": 26476544, 00:15:13.383 "uuid": "48c18944-342d-4d32-84f8-e18750e065da", 00:15:13.383 "assigned_rate_limits": { 00:15:13.383 "rw_ios_per_sec": 0, 00:15:13.383 "rw_mbytes_per_sec": 0, 00:15:13.383 "r_mbytes_per_sec": 0, 00:15:13.383 "w_mbytes_per_sec": 0 00:15:13.383 }, 00:15:13.383 "claimed": false, 00:15:13.383 "zoned": false, 00:15:13.383 "supported_io_types": { 00:15:13.383 "read": true, 00:15:13.383 "write": true, 00:15:13.383 "unmap": true, 00:15:13.383 "flush": false, 00:15:13.383 "reset": true, 00:15:13.383 "nvme_admin": false, 00:15:13.383 "nvme_io": false, 00:15:13.383 "nvme_io_md": false, 00:15:13.383 "write_zeroes": true, 00:15:13.383 "zcopy": false, 00:15:13.383 "get_zone_info": false, 00:15:13.383 "zone_management": false, 00:15:13.383 "zone_append": false, 00:15:13.383 "compare": false, 00:15:13.383 "compare_and_write": false, 00:15:13.383 "abort": false, 00:15:13.383 "seek_hole": true, 00:15:13.383 "seek_data": true, 00:15:13.383 "copy": false, 00:15:13.383 "nvme_iov_md": false 00:15:13.383 }, 00:15:13.383 "driver_specific": { 00:15:13.383 "lvol": { 00:15:13.383 "lvol_store_uuid": "5801bcce-ffbf-424d-92cc-384036302784", 00:15:13.383 "base_bdev": "nvme0n1", 00:15:13.383 "thin_provision": true, 00:15:13.383 "num_allocated_clusters": 0, 00:15:13.383 "snapshot": false, 00:15:13.383 "clone": false, 00:15:13.383 "esnap_clone": false 00:15:13.383 } 00:15:13.383 } 00:15:13.383 } 00:15:13.383 ]' 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:13.384 12:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 48c18944-342d-4d32-84f8-e18750e065da -c nvc0n1p0 --l2p_dram_limit 60 00:15:13.644 [2024-08-11 12:57:05.189021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.189099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:13.644 [2024-08-11 12:57:05.189122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:13.644 [2024-08-11 12:57:05.189137] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.189251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.189272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:13.644 [2024-08-11 12:57:05.189287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:13.644 [2024-08-11 12:57:05.189298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.189413] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:13.644 [2024-08-11 12:57:05.189745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:13.644 [2024-08-11 12:57:05.189784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.189810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:13.644 [2024-08-11 12:57:05.189840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:15:13.644 [2024-08-11 12:57:05.189853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.190080] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 007f3197-2552-4b0a-9d91-8eeceb304277 00:15:13.644 [2024-08-11 12:57:05.191191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.191296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:13.644 [2024-08-11 12:57:05.191342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:13.644 [2024-08-11 12:57:05.191358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.195800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.195908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:13.644 [2024-08-11 12:57:05.195928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:15:13.644 [2024-08-11 12:57:05.195943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.196078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.196105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:13.644 [2024-08-11 12:57:05.196120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:15:13.644 [2024-08-11 12:57:05.196137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.196228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.196270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:13.644 [2024-08-11 12:57:05.196283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:13.644 [2024-08-11 12:57:05.196298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.196339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:13.644 [2024-08-11 12:57:05.197814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.197866] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:13.644 [2024-08-11 12:57:05.197915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:15:13.644 [2024-08-11 12:57:05.197927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.197979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.197998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:13.644 [2024-08-11 12:57:05.198013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:13.644 [2024-08-11 12:57:05.198024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.198089] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:13.644 [2024-08-11 12:57:05.198271] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:13.644 [2024-08-11 12:57:05.198296] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:13.644 [2024-08-11 12:57:05.198312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:15:13.644 [2024-08-11 12:57:05.198330] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:13.644 [2024-08-11 12:57:05.198345] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:13.644 [2024-08-11 12:57:05.198359] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:13.644 [2024-08-11 12:57:05.198371] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:13.644 [2024-08-11 12:57:05.198387] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:13.644 [2024-08-11 12:57:05.198399] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:13.644 [2024-08-11 12:57:05.198415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.198427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:13.644 [2024-08-11 12:57:05.198441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:15:13.644 [2024-08-11 12:57:05.198456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.198563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.644 [2024-08-11 12:57:05.198578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:13.644 [2024-08-11 12:57:05.198596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:13.644 [2024-08-11 12:57:05.198607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.644 [2024-08-11 12:57:05.198730] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:13.644 [2024-08-11 12:57:05.198758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:13.644 [2024-08-11 12:57:05.198776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:13.644 [2024-08-11 12:57:05.198789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.644 [2024-08-11 12:57:05.198806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:13.644 [2024-08-11 
12:57:05.198817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:13.644 [2024-08-11 12:57:05.198831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:13.644 [2024-08-11 12:57:05.198842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:13.644 [2024-08-11 12:57:05.198855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:13.644 [2024-08-11 12:57:05.198878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:13.644 [2024-08-11 12:57:05.198895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:13.644 [2024-08-11 12:57:05.198908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:13.644 [2024-08-11 12:57:05.198921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:13.644 [2024-08-11 12:57:05.198948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:13.644 [2024-08-11 12:57:05.198967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:13.644 [2024-08-11 12:57:05.198978] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.644 [2024-08-11 12:57:05.198992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:13.644 [2024-08-11 12:57:05.199004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:13.644 [2024-08-11 12:57:05.199022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:13.645 [2024-08-11 12:57:05.199048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:13.645 [2024-08-11 12:57:05.199073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:13.645 [2024-08-11 12:57:05.199085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:13.645 [2024-08-11 12:57:05.199109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:13.645 [2024-08-11 12:57:05.199122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:13.645 [2024-08-11 12:57:05.199147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:13.645 [2024-08-11 12:57:05.199159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:13.645 [2024-08-11 12:57:05.199186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:13.645 [2024-08-11 12:57:05.199200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:13.645 [2024-08-11 12:57:05.199224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:13.645 [2024-08-11 12:57:05.199236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:13.645 [2024-08-11 12:57:05.199249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:13.645 [2024-08-11 12:57:05.199260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:13.645 [2024-08-11 12:57:05.199274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:13.645 [2024-08-11 12:57:05.199285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:13.645 [2024-08-11 12:57:05.199312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:13.645 [2024-08-11 12:57:05.199341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199352] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:13.645 [2024-08-11 12:57:05.199366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:13.645 [2024-08-11 12:57:05.199378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:13.645 [2024-08-11 12:57:05.199393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:13.645 [2024-08-11 12:57:05.199405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:13.645 [2024-08-11 12:57:05.199419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:13.645 [2024-08-11 12:57:05.199431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:13.645 [2024-08-11 12:57:05.199446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:13.645 [2024-08-11 12:57:05.199457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:13.645 [2024-08-11 12:57:05.199470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:13.645 [2024-08-11 12:57:05.199506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:13.645 [2024-08-11 12:57:05.199532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:13.645 [2024-08-11 12:57:05.199546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:13.645 [2024-08-11 12:57:05.199561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:13.645 [2024-08-11 12:57:05.199573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:13.645 [2024-08-11 12:57:05.199587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:13.645 [2024-08-11 12:57:05.199600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:13.645 [2024-08-11 12:57:05.199614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:13.645 [2024-08-11 12:57:05.199626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:13.645 [2024-08-11 12:57:05.199644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:13.645 [2024-08-11 
12:57:05.199657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:13.645 [2024-08-11 12:57:05.199671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:13.645 [2024-08-11 12:57:05.199687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:13.645 [2024-08-11 12:57:05.199701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:13.645 [2024-08-11 12:57:05.199713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:13.645 [2024-08-11 12:57:05.199728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:13.645 [2024-08-11 12:57:05.199740] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:13.645 [2024-08-11 12:57:05.199755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:13.645 [2024-08-11 12:57:05.199785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:13.645 [2024-08-11 12:57:05.199800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:13.645 [2024-08-11 12:57:05.199813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:13.645 [2024-08-11 12:57:05.199828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:13.645 [2024-08-11 12:57:05.199868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.645 [2024-08-11 12:57:05.199911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:13.645 [2024-08-11 12:57:05.199929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:15:13.645 [2024-08-11 12:57:05.199945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.645 [2024-08-11 12:57:05.200066] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
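[editor's note] For orientation, the layout dump above is internally consistent: 20971520 L2P entries at the reported 4-byte address size account for the 80.00 MiB l2p region (20971520 x 4 B = 80 MiB), and the same entry count at the 4 KiB block size corresponds to 80 GiB of logical address space, carved out of the 103424 MiB base lvol and the 5171 MiB NV cache partition created earlier.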
00:15:13.645 [2024-08-11 12:57:05.200094] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:16.260 [2024-08-11 12:57:07.478405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.260 [2024-08-11 12:57:07.478496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:16.260 [2024-08-11 12:57:07.478534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2278.353 ms 00:15:16.260 [2024-08-11 12:57:07.478548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.260 [2024-08-11 12:57:07.486224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.260 [2024-08-11 12:57:07.486326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:16.260 [2024-08-11 12:57:07.486347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.479 ms 00:15:16.260 [2024-08-11 12:57:07.486362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.260 [2024-08-11 12:57:07.486600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.260 [2024-08-11 12:57:07.486642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:16.260 [2024-08-11 12:57:07.486657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:15:16.260 [2024-08-11 12:57:07.486671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.260 [2024-08-11 12:57:07.503265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.260 [2024-08-11 12:57:07.503338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:16.260 [2024-08-11 12:57:07.503374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.518 ms 00:15:16.260 [2024-08-11 12:57:07.503388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.260 [2024-08-11 12:57:07.503442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.260 [2024-08-11 12:57:07.503461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:16.260 [2024-08-11 12:57:07.503542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:16.261 [2024-08-11 12:57:07.503571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.504038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.504093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:16.261 [2024-08-11 12:57:07.504140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:15:16.261 [2024-08-11 12:57:07.504155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.504353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.504386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:16.261 [2024-08-11 12:57:07.504401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:15:16.261 [2024-08-11 12:57:07.504415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.510185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.510251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:16.261 [2024-08-11 
12:57:07.510300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.736 ms 00:15:16.261 [2024-08-11 12:57:07.510314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.518588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:16.261 [2024-08-11 12:57:07.532441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.532524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:16.261 [2024-08-11 12:57:07.532564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.989 ms 00:15:16.261 [2024-08-11 12:57:07.532576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.573712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.573793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:16.261 [2024-08-11 12:57:07.573834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.071 ms 00:15:16.261 [2024-08-11 12:57:07.573847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.574208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.574246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:16.261 [2024-08-11 12:57:07.574269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:15:16.261 [2024-08-11 12:57:07.574282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.578017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.578057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:16.261 [2024-08-11 12:57:07.578093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:15:16.261 [2024-08-11 12:57:07.578105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.581408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.581463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:16.261 [2024-08-11 12:57:07.581501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:15:16.261 [2024-08-11 12:57:07.581513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.581944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.581980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:16.261 [2024-08-11 12:57:07.582003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:15:16.261 [2024-08-11 12:57:07.582016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.611936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.612015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:16.261 [2024-08-11 12:57:07.612039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.874 ms 00:15:16.261 [2024-08-11 12:57:07.612053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.616623] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.616680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:16.261 [2024-08-11 12:57:07.616699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:15:16.261 [2024-08-11 12:57:07.616714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.620565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.620617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:16.261 [2024-08-11 12:57:07.620651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.795 ms 00:15:16.261 [2024-08-11 12:57:07.620662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.624897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.624951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:16.261 [2024-08-11 12:57:07.624987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:15:16.261 [2024-08-11 12:57:07.624998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.625073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.625094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:16.261 [2024-08-11 12:57:07.625110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:16.261 [2024-08-11 12:57:07.625123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.625237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:16.261 [2024-08-11 12:57:07.625256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:16.261 [2024-08-11 12:57:07.625271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:16.261 [2024-08-11 12:57:07.625282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.261 [2024-08-11 12:57:07.626615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2437.029 ms, result 0 00:15:16.261 { 00:15:16.261 "name": "ftl0", 00:15:16.261 "uuid": "007f3197-2552-4b0a-9d91-8eeceb304277" 00:15:16.261 } 00:15:16.261 12:57:07 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:16.261 12:57:07 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:15:16.261 12:57:07 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:16.261 12:57:07 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:15:16.261 12:57:07 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:16.261 12:57:07 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:16.261 12:57:07 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:16.520 12:57:07 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:16.779 [ 00:15:16.779 { 00:15:16.779 "name": "ftl0", 00:15:16.779 "aliases": [ 00:15:16.779 "007f3197-2552-4b0a-9d91-8eeceb304277" 00:15:16.779 ], 00:15:16.779 "product_name": "FTL disk", 00:15:16.779 
"block_size": 4096, 00:15:16.779 "num_blocks": 20971520, 00:15:16.779 "uuid": "007f3197-2552-4b0a-9d91-8eeceb304277", 00:15:16.779 "assigned_rate_limits": { 00:15:16.779 "rw_ios_per_sec": 0, 00:15:16.779 "rw_mbytes_per_sec": 0, 00:15:16.779 "r_mbytes_per_sec": 0, 00:15:16.779 "w_mbytes_per_sec": 0 00:15:16.779 }, 00:15:16.779 "claimed": false, 00:15:16.779 "zoned": false, 00:15:16.779 "supported_io_types": { 00:15:16.779 "read": true, 00:15:16.779 "write": true, 00:15:16.779 "unmap": true, 00:15:16.779 "flush": true, 00:15:16.779 "reset": false, 00:15:16.779 "nvme_admin": false, 00:15:16.779 "nvme_io": false, 00:15:16.779 "nvme_io_md": false, 00:15:16.779 "write_zeroes": true, 00:15:16.779 "zcopy": false, 00:15:16.779 "get_zone_info": false, 00:15:16.779 "zone_management": false, 00:15:16.779 "zone_append": false, 00:15:16.779 "compare": false, 00:15:16.779 "compare_and_write": false, 00:15:16.779 "abort": false, 00:15:16.779 "seek_hole": false, 00:15:16.779 "seek_data": false, 00:15:16.779 "copy": false, 00:15:16.779 "nvme_iov_md": false 00:15:16.779 }, 00:15:16.779 "driver_specific": { 00:15:16.779 "ftl": { 00:15:16.779 "base_bdev": "48c18944-342d-4d32-84f8-e18750e065da", 00:15:16.779 "cache": "nvc0n1p0" 00:15:16.779 } 00:15:16.779 } 00:15:16.779 } 00:15:16.779 ] 00:15:16.779 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:15:16.779 12:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:16.779 12:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:17.039 12:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:17.039 12:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:17.039 [2024-08-11 12:57:08.636099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.636370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:17.300 [2024-08-11 12:57:08.636408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:17.300 [2024-08-11 12:57:08.636425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.636485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:17.300 [2024-08-11 12:57:08.637000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.637039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:17.300 [2024-08-11 12:57:08.637058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:15:17.300 [2024-08-11 12:57:08.637071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.637545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.637588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:17.300 [2024-08-11 12:57:08.637607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:15:17.300 [2024-08-11 12:57:08.637634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.641206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.641266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:17.300 [2024-08-11 
12:57:08.641298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.504 ms 00:15:17.300 [2024-08-11 12:57:08.641309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.647449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.647480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:17.300 [2024-08-11 12:57:08.647512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:15:17.300 [2024-08-11 12:57:08.647524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.649026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.649063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:17.300 [2024-08-11 12:57:08.649083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:15:17.300 [2024-08-11 12:57:08.649094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.653283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.653350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:17.300 [2024-08-11 12:57:08.653385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:15:17.300 [2024-08-11 12:57:08.653397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.653581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.653600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:17.300 [2024-08-11 12:57:08.653616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:15:17.300 [2024-08-11 12:57:08.653628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.655456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.655491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:17.300 [2024-08-11 12:57:08.655523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.782 ms 00:15:17.300 [2024-08-11 12:57:08.655534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.657160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.657195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:17.300 [2024-08-11 12:57:08.657213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:15:17.300 [2024-08-11 12:57:08.657225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.658587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.658622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:17.300 [2024-08-11 12:57:08.658655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:15:17.300 [2024-08-11 12:57:08.658665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.659821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.300 [2024-08-11 12:57:08.659913] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:17.300 [2024-08-11 12:57:08.659934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:15:17.300 [2024-08-11 12:57:08.659946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.300 [2024-08-11 12:57:08.660016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:17.300 [2024-08-11 12:57:08.660057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 
12:57:08.660413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:17.300 [2024-08-11 12:57:08.660695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:17.301 [2024-08-11 12:57:08.660747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.660997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:17.301 [2024-08-11 12:57:08.661469] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:17.301 [2024-08-11 12:57:08.661485] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 007f3197-2552-4b0a-9d91-8eeceb304277 00:15:17.301 [2024-08-11 12:57:08.661497] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:17.301 [2024-08-11 12:57:08.661510] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:17.301 [2024-08-11 12:57:08.661522] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:17.301 [2024-08-11 12:57:08.661535] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:17.301 [2024-08-11 12:57:08.661546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:17.301 [2024-08-11 12:57:08.661560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:17.301 [2024-08-11 12:57:08.661571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:17.301 [2024-08-11 12:57:08.661585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:17.301 [2024-08-11 12:57:08.661597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:17.301 [2024-08-11 12:57:08.661611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.301 [2024-08-11 12:57:08.661625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:17.301 [2024-08-11 12:57:08.661639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:15:17.301 [2024-08-11 12:57:08.661650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 [2024-08-11 12:57:08.663158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.301 [2024-08-11 12:57:08.663188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:17.301 [2024-08-11 12:57:08.663206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:15:17.301 [2024-08-11 12:57:08.663218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 [2024-08-11 12:57:08.663354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.301 [2024-08-11 12:57:08.663375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:17.301 [2024-08-11 12:57:08.663391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:15:17.301 [2024-08-11 12:57:08.663402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 [2024-08-11 12:57:08.668759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.301 [2024-08-11 12:57:08.668817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:17.301 [2024-08-11 12:57:08.668866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.301 [2024-08-11 12:57:08.668910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 
[2024-08-11 12:57:08.668981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.301 [2024-08-11 12:57:08.668998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:17.301 [2024-08-11 12:57:08.669027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.301 [2024-08-11 12:57:08.669054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 [2024-08-11 12:57:08.669210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.301 [2024-08-11 12:57:08.669242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:17.301 [2024-08-11 12:57:08.669275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.301 [2024-08-11 12:57:08.669287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 [2024-08-11 12:57:08.669322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.301 [2024-08-11 12:57:08.669340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:17.301 [2024-08-11 12:57:08.669354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.301 [2024-08-11 12:57:08.669365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 [2024-08-11 12:57:08.677987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.301 [2024-08-11 12:57:08.678067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:17.301 [2024-08-11 12:57:08.678104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.301 [2024-08-11 12:57:08.678116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.301 [2024-08-11 12:57:08.685199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.301 [2024-08-11 12:57:08.685262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:17.301 [2024-08-11 12:57:08.685296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.302 [2024-08-11 12:57:08.685308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.302 [2024-08-11 12:57:08.685412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.302 [2024-08-11 12:57:08.685432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:17.302 [2024-08-11 12:57:08.685449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.302 [2024-08-11 12:57:08.685478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.302 [2024-08-11 12:57:08.685544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.302 [2024-08-11 12:57:08.685561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:17.302 [2024-08-11 12:57:08.685578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.302 [2024-08-11 12:57:08.685589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.302 [2024-08-11 12:57:08.685696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.302 [2024-08-11 12:57:08.685714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:17.302 [2024-08-11 12:57:08.685729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.302 [2024-08-11 12:57:08.685740] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.302 [2024-08-11 12:57:08.685812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.302 [2024-08-11 12:57:08.685841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:17.302 [2024-08-11 12:57:08.685856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.302 [2024-08-11 12:57:08.685882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.302 [2024-08-11 12:57:08.685960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.302 [2024-08-11 12:57:08.685976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:17.302 [2024-08-11 12:57:08.685992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.302 [2024-08-11 12:57:08.686004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.302 [2024-08-11 12:57:08.686066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.302 [2024-08-11 12:57:08.686083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:17.302 [2024-08-11 12:57:08.686101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.302 [2024-08-11 12:57:08.686112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.302 [2024-08-11 12:57:08.686315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.151 ms, result 0 00:15:17.302 true 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83363 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 83363 ']' 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 83363 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83363 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83363' 00:15:17.302 killing process with pid 83363 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 83363 00:15:17.302 12:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 83363 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:20.590 12:57:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:20.590 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:20.590 fio-3.35 00:15:20.590 Starting 1 thread 00:15:20.590 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:15:25.857 00:15:25.857 test: (groupid=0, jobs=1): err= 0: pid=83527: Sun Aug 11 12:57:16 2024 00:15:25.857 read: IOPS=896, BW=59.5MiB/s (62.4MB/s)(255MiB/4277msec) 00:15:25.857 slat (nsec): min=5414, max=40682, avg=7652.08, stdev=4012.81 00:15:25.857 clat (usec): min=338, max=1182, avg=497.12, stdev=54.37 00:15:25.857 lat (usec): min=345, max=1188, avg=504.77, stdev=55.02 00:15:25.857 clat percentiles (usec): 00:15:25.857 | 1.00th=[ 392], 5.00th=[ 437], 10.00th=[ 449], 20.00th=[ 461], 00:15:25.857 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[ 486], 60.00th=[ 494], 00:15:25.857 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 594], 00:15:25.857 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 906], 99.95th=[ 1057], 00:15:25.857 | 99.99th=[ 1188] 00:15:25.857 write: IOPS=902, BW=59.9MiB/s (62.8MB/s)(256MiB/4273msec); 0 zone resets 00:15:25.857 slat (usec): min=18, max=122, avg=25.13, stdev= 8.25 00:15:25.857 clat (usec): min=372, max=1475, avg=567.70, stdev=68.74 00:15:25.857 lat (usec): min=408, max=1505, avg=592.83, stdev=69.52 00:15:25.857 clat percentiles (usec): 00:15:25.857 | 1.00th=[ 453], 5.00th=[ 482], 10.00th=[ 498], 20.00th=[ 523], 00:15:25.857 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 570], 00:15:25.857 | 70.00th=[ 586], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668], 00:15:25.857 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 1090], 99.95th=[ 1139], 00:15:25.857 | 99.99th=[ 1483] 00:15:25.857 bw ( KiB/s): min=56984, max=62832, per=100.00%, avg=61372.88, stdev=1874.26, samples=8 00:15:25.857 iops : min= 838, max= 924, avg=902.50, stdev=27.54, samples=8 
00:15:25.857 lat (usec) : 500=37.72%, 750=61.18%, 1000=1.00% 00:15:25.857 lat (msec) : 2=0.10% 00:15:25.857 cpu : usr=98.95%, sys=0.21%, ctx=5, majf=0, minf=1326 00:15:25.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.857 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:25.857 00:15:25.857 Run status group 0 (all jobs): 00:15:25.857 READ: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=255MiB (267MB), run=4277-4277msec 00:15:25.857 WRITE: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=256MiB (269MB), run=4273-4273msec 00:15:25.857 ----------------------------------------------------- 00:15:25.857 Suppressions used: 00:15:25.857 count bytes template 00:15:25.857 1 5 /usr/src/fio/parse.c 00:15:25.857 1 8 libtcmalloc_minimal.so 00:15:25.857 1 904 libcrypto.so 00:15:25.857 ----------------------------------------------------- 00:15:25.857 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 
00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:25.857 12:57:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:26.116 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:26.116 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:26.116 fio-3.35 00:15:26.116 Starting 2 threads 00:15:26.116 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:15:58.193 00:15:58.193 first_half: (groupid=0, jobs=1): err= 0: pid=83619: Sun Aug 11 12:57:47 2024 00:15:58.193 read: IOPS=2210, BW=8844KiB/s (9056kB/s)(255MiB/29539msec) 00:15:58.193 slat (nsec): min=4167, max=55608, avg=7499.85, stdev=2995.95 00:15:58.193 clat (usec): min=778, max=305536, avg=46646.23, stdev=20688.89 00:15:58.193 lat (usec): min=786, max=305544, avg=46653.73, stdev=20689.03 00:15:58.193 clat percentiles (msec): 00:15:58.193 | 1.00th=[ 19], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:15:58.193 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:15:58.193 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 53], 95.00th=[ 69], 00:15:58.193 | 99.00th=[ 163], 99.50th=[ 194], 99.90th=[ 232], 99.95th=[ 253], 00:15:58.193 | 99.99th=[ 296] 00:15:58.193 write: IOPS=2687, BW=10.5MiB/s (11.0MB/s)(256MiB/24389msec); 0 zone resets 00:15:58.193 slat (usec): min=5, max=446, avg= 9.70, stdev= 6.66 00:15:58.193 clat (usec): min=498, max=110684, avg=11178.96, stdev=18894.93 00:15:58.193 lat (usec): min=511, max=110692, avg=11188.66, stdev=18895.09 00:15:58.193 clat percentiles (usec): 00:15:58.193 | 1.00th=[ 1029], 5.00th=[ 1336], 10.00th=[ 1549], 20.00th=[ 2008], 00:15:58.193 | 30.00th=[ 3458], 40.00th=[ 4752], 50.00th=[ 5997], 60.00th=[ 7177], 00:15:58.193 | 70.00th=[ 8356], 80.00th=[ 12387], 90.00th=[ 15795], 95.00th=[ 46924], 00:15:58.193 | 99.00th=[ 96994], 99.50th=[101188], 99.90th=[106431], 99.95th=[108528], 00:15:58.193 | 99.99th=[109577] 00:15:58.193 bw ( KiB/s): min= 320, max=42096, per=99.67%, avg=19417.78, stdev=12300.53, samples=27 00:15:58.193 iops : min= 80, max=10524, avg=4854.37, stdev=3075.11, samples=27 00:15:58.193 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.35% 00:15:58.193 lat (msec) : 2=9.59%, 4=7.47%, 10=20.36%, 20=8.87%, 50=45.10% 00:15:58.193 lat (msec) : 100=6.54%, 250=1.63%, 500=0.03% 00:15:58.193 cpu : usr=99.01%, sys=0.29%, ctx=52, majf=0, minf=5527 00:15:58.193 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:58.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.193 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.193 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.193 second_half: (groupid=0, jobs=1): err= 0: pid=83620: Sun Aug 11 12:57:47 2024 00:15:58.193 read: IOPS=2193, BW=8774KiB/s (8984kB/s)(255MiB/29776msec) 00:15:58.193 slat (nsec): min=4275, max=58498, avg=7475.86, stdev=3037.11 00:15:58.193 clat (usec): min=848, max=325923, avg=45742.33, stdev=23671.27 00:15:58.194 lat (usec): min=857, max=325934, avg=45749.81, stdev=23671.42 00:15:58.194 clat percentiles (msec): 00:15:58.194 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 
41], 00:15:58.194 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:15:58.194 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 49], 95.00th=[ 64], 00:15:58.194 | 99.00th=[ 180], 99.50th=[ 197], 99.90th=[ 257], 99.95th=[ 305], 00:15:58.194 | 99.99th=[ 321] 00:15:58.194 write: IOPS=2435, BW=9741KiB/s (9975kB/s)(256MiB/26912msec); 0 zone resets 00:15:58.194 slat (usec): min=5, max=127, avg= 9.66, stdev= 5.81 00:15:58.194 clat (usec): min=541, max=112462, avg=12541.08, stdev=20649.85 00:15:58.194 lat (usec): min=556, max=112473, avg=12550.74, stdev=20650.09 00:15:58.194 clat percentiles (usec): 00:15:58.194 | 1.00th=[ 971], 5.00th=[ 1303], 10.00th=[ 1516], 20.00th=[ 1827], 00:15:58.194 | 30.00th=[ 2212], 40.00th=[ 3359], 50.00th=[ 5604], 60.00th=[ 7177], 00:15:58.194 | 70.00th=[ 8979], 80.00th=[ 13435], 90.00th=[ 38011], 95.00th=[ 58983], 00:15:58.194 | 99.00th=[ 96994], 99.50th=[102237], 99.90th=[107480], 99.95th=[109577], 00:15:58.194 | 99.99th=[110625] 00:15:58.194 bw ( KiB/s): min= 160, max=51584, per=99.68%, avg=19419.44, stdev=13617.09, samples=27 00:15:58.194 iops : min= 40, max=12896, avg=4854.81, stdev=3404.26, samples=27 00:15:58.194 lat (usec) : 750=0.05%, 1000=0.56% 00:15:58.194 lat (msec) : 2=12.10%, 4=8.98%, 10=14.67%, 20=9.40%, 50=46.99% 00:15:58.194 lat (msec) : 100=5.18%, 250=2.01%, 500=0.06% 00:15:58.194 cpu : usr=98.93%, sys=0.37%, ctx=38, majf=0, minf=5603 00:15:58.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:58.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.194 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.194 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.194 00:15:58.194 Run status group 0 (all jobs): 00:15:58.194 READ: bw=17.1MiB/s (18.0MB/s), 8774KiB/s-8844KiB/s (8984kB/s-9056kB/s), io=510MiB (535MB), run=29539-29776msec 00:15:58.194 WRITE: bw=19.0MiB/s (19.9MB/s), 9741KiB/s-10.5MiB/s (9975kB/s-11.0MB/s), io=512MiB (537MB), run=24389-26912msec 00:15:58.194 ----------------------------------------------------- 00:15:58.194 Suppressions used: 00:15:58.194 count bytes template 00:15:58.194 2 10 /usr/src/fio/parse.c 00:15:58.194 4 384 /usr/src/fio/iolog.c 00:15:58.194 1 8 libtcmalloc_minimal.so 00:15:58.194 1 904 libcrypto.so 00:15:58.194 ----------------------------------------------------- 00:15:58.194 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local 
fio_dir=/usr/src/fio 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:58.194 12:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:58.194 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:58.194 fio-3.35 00:15:58.194 Starting 1 thread 00:15:58.194 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:16.330 00:16:16.330 test: (groupid=0, jobs=1): err= 0: pid=83977: Sun Aug 11 12:58:06 2024 00:16:16.330 read: IOPS=5988, BW=23.4MiB/s (24.5MB/s)(255MiB/10888msec) 00:16:16.330 slat (nsec): min=4054, max=53837, avg=6887.04, stdev=3201.07 00:16:16.330 clat (usec): min=879, max=42315, avg=21364.04, stdev=1163.89 00:16:16.330 lat (usec): min=884, max=42323, avg=21370.93, stdev=1163.91 00:16:16.331 clat percentiles (usec): 00:16:16.331 | 1.00th=[20055], 5.00th=[20317], 10.00th=[20579], 20.00th=[20841], 00:16:16.331 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21103], 60.00th=[21365], 00:16:16.331 | 70.00th=[21627], 80.00th=[21627], 90.00th=[22152], 95.00th=[22676], 00:16:16.331 | 99.00th=[26084], 99.50th=[26870], 99.90th=[31851], 99.95th=[36963], 00:16:16.331 | 99.99th=[41681] 00:16:16.331 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(256MiB/5615msec); 0 zone resets 00:16:16.331 slat (usec): min=5, max=254, avg= 9.99, stdev= 6.31 00:16:16.331 clat (usec): min=669, max=65981, avg=10904.60, stdev=13926.16 00:16:16.331 lat (usec): min=680, max=65989, avg=10914.58, stdev=13926.21 00:16:16.331 clat percentiles (usec): 00:16:16.331 | 1.00th=[ 988], 5.00th=[ 1188], 10.00th=[ 1303], 20.00th=[ 1483], 00:16:16.331 | 30.00th=[ 1663], 40.00th=[ 2147], 50.00th=[ 7111], 60.00th=[ 8029], 00:16:16.331 | 70.00th=[ 9241], 80.00th=[10814], 90.00th=[40633], 95.00th=[42730], 00:16:16.331 | 99.00th=[47449], 99.50th=[50594], 99.90th=[57934], 99.95th=[61080], 00:16:16.331 | 99.99th=[64226] 00:16:16.331 bw ( KiB/s): min= 8064, max=65320, per=93.58%, avg=43690.67, stdev=14700.72, samples=12 00:16:16.331 iops : min= 2016, max=16330, avg=10922.67, 
stdev=3675.18, samples=12 00:16:16.331 lat (usec) : 750=0.01%, 1000=0.58% 00:16:16.331 lat (msec) : 2=18.78%, 4=1.56%, 10=16.88%, 20=4.61%, 50=57.31% 00:16:16.331 lat (msec) : 100=0.28% 00:16:16.331 cpu : usr=98.27%, sys=0.88%, ctx=27, majf=0, minf=5577 00:16:16.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:16.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.331 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.331 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.331 00:16:16.331 Run status group 0 (all jobs): 00:16:16.331 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=255MiB (267MB), run=10888-10888msec 00:16:16.331 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=256MiB (268MB), run=5615-5615msec 00:16:16.331 ----------------------------------------------------- 00:16:16.331 Suppressions used: 00:16:16.331 count bytes template 00:16:16.331 1 5 /usr/src/fio/parse.c 00:16:16.331 2 192 /usr/src/fio/iolog.c 00:16:16.331 1 8 libtcmalloc_minimal.so 00:16:16.331 1 904 libcrypto.so 00:16:16.331 ----------------------------------------------------- 00:16:16.331 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:16.331 Remove shared memory files 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid68881 /dev/shm/spdk_tgt_trace.pid82314 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:16.331 00:16:16.331 real 1m6.644s 00:16:16.331 user 2m32.874s 00:16:16.331 sys 0m3.479s 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:16.331 12:58:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 ************************************ 00:16:16.331 END TEST ftl_fio_basic 00:16:16.331 ************************************ 00:16:16.331 12:58:07 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:16.331 12:58:07 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:16.331 12:58:07 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:16.331 12:58:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 ************************************ 00:16:16.331 START TEST ftl_bdevperf 00:16:16.331 ************************************ 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:16.331 * Looking for test storage... 
00:16:16.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:16.331 12:58:07 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=84235 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 84235 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 84235 ']' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:16.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:16.331 [2024-08-11 12:58:07.640164] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
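At this point bdevperf has been started with -z, so it only initializes and then waits on /var/tmp/spdk.sock for RPC commands; the bdevs are provisioned over rpc.py afterwards and each workload is later kicked off with bdevperf.py perform_tests, as the traces further down show. A minimal sketch of that driving pattern, using the same binaries and paths as this run (the readiness poll is a stand-in for the harness's waitforlisten helper, not the verbatim ftl/bdevperf.sh logic):
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $bdevperf -z -T ftl0 &                      # -z: initialize, then wait for the perform_tests RPC before running I/O
  bdevperf_pid=$!
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # stand-in readiness poll on /var/tmp/spdk.sock
  # ... provision nvme0 / nvc0 / ftl0 via $rpc (see the calls traced below), then e.g.:
  # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632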
00:16:16.331 [2024-08-11 12:58:07.640358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84235 ] 00:16:16.331 [2024-08-11 12:58:07.786782] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.331 [2024-08-11 12:58:07.820946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:16.331 12:58:07 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:16.899 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:17.159 { 00:16:17.159 "name": "nvme0n1", 00:16:17.159 "aliases": [ 00:16:17.159 "2feb8ecc-60db-4f1e-803c-feb5d3f7e2c7" 00:16:17.159 ], 00:16:17.159 "product_name": "NVMe disk", 00:16:17.159 "block_size": 4096, 00:16:17.159 "num_blocks": 1310720, 00:16:17.159 "uuid": "2feb8ecc-60db-4f1e-803c-feb5d3f7e2c7", 00:16:17.159 "assigned_rate_limits": { 00:16:17.159 "rw_ios_per_sec": 0, 00:16:17.159 "rw_mbytes_per_sec": 0, 00:16:17.159 "r_mbytes_per_sec": 0, 00:16:17.159 "w_mbytes_per_sec": 0 00:16:17.159 }, 00:16:17.159 "claimed": true, 00:16:17.159 "claim_type": "read_many_write_one", 00:16:17.159 "zoned": false, 00:16:17.159 "supported_io_types": { 00:16:17.159 "read": true, 00:16:17.159 "write": true, 00:16:17.159 "unmap": true, 00:16:17.159 "flush": true, 00:16:17.159 "reset": true, 00:16:17.159 "nvme_admin": true, 00:16:17.159 "nvme_io": true, 00:16:17.159 "nvme_io_md": false, 00:16:17.159 "write_zeroes": true, 00:16:17.159 "zcopy": false, 00:16:17.159 "get_zone_info": false, 00:16:17.159 "zone_management": false, 00:16:17.159 "zone_append": false, 00:16:17.159 "compare": true, 00:16:17.159 "compare_and_write": false, 00:16:17.159 "abort": true, 00:16:17.159 "seek_hole": false, 00:16:17.159 "seek_data": false, 00:16:17.159 "copy": true, 00:16:17.159 "nvme_iov_md": false 00:16:17.159 }, 00:16:17.159 "driver_specific": { 00:16:17.159 "nvme": [ 00:16:17.159 { 
00:16:17.159 "pci_address": "0000:00:11.0", 00:16:17.159 "trid": { 00:16:17.159 "trtype": "PCIe", 00:16:17.159 "traddr": "0000:00:11.0" 00:16:17.159 }, 00:16:17.159 "ctrlr_data": { 00:16:17.159 "cntlid": 0, 00:16:17.159 "vendor_id": "0x1b36", 00:16:17.159 "model_number": "QEMU NVMe Ctrl", 00:16:17.159 "serial_number": "12341", 00:16:17.159 "firmware_revision": "8.0.0", 00:16:17.159 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:17.159 "oacs": { 00:16:17.159 "security": 0, 00:16:17.159 "format": 1, 00:16:17.159 "firmware": 0, 00:16:17.159 "ns_manage": 1 00:16:17.159 }, 00:16:17.159 "multi_ctrlr": false, 00:16:17.159 "ana_reporting": false 00:16:17.159 }, 00:16:17.159 "vs": { 00:16:17.159 "nvme_version": "1.4" 00:16:17.159 }, 00:16:17.159 "ns_data": { 00:16:17.159 "id": 1, 00:16:17.159 "can_share": false 00:16:17.159 } 00:16:17.159 } 00:16:17.159 ], 00:16:17.159 "mp_policy": "active_passive" 00:16:17.159 } 00:16:17.159 } 00:16:17.159 ]' 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:17.159 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:17.418 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5801bcce-ffbf-424d-92cc-384036302784 00:16:17.418 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:17.418 12:58:08 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5801bcce-ffbf-424d-92cc-384036302784 00:16:17.677 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:17.936 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6d8d02e4-5668-45d6-bd41-0f68ac65e99a 00:16:17.936 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6d8d02e4-5668-45d6-bd41-0f68ac65e99a 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1374 -- # local bdev_name=d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:18.195 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:18.454 { 00:16:18.454 "name": "d4eb12d7-4c40-4b89-af5d-08ef0358a7e0", 00:16:18.454 "aliases": [ 00:16:18.454 "lvs/nvme0n1p0" 00:16:18.454 ], 00:16:18.454 "product_name": "Logical Volume", 00:16:18.454 "block_size": 4096, 00:16:18.454 "num_blocks": 26476544, 00:16:18.454 "uuid": "d4eb12d7-4c40-4b89-af5d-08ef0358a7e0", 00:16:18.454 "assigned_rate_limits": { 00:16:18.454 "rw_ios_per_sec": 0, 00:16:18.454 "rw_mbytes_per_sec": 0, 00:16:18.454 "r_mbytes_per_sec": 0, 00:16:18.454 "w_mbytes_per_sec": 0 00:16:18.454 }, 00:16:18.454 "claimed": false, 00:16:18.454 "zoned": false, 00:16:18.454 "supported_io_types": { 00:16:18.454 "read": true, 00:16:18.454 "write": true, 00:16:18.454 "unmap": true, 00:16:18.454 "flush": false, 00:16:18.454 "reset": true, 00:16:18.454 "nvme_admin": false, 00:16:18.454 "nvme_io": false, 00:16:18.454 "nvme_io_md": false, 00:16:18.454 "write_zeroes": true, 00:16:18.454 "zcopy": false, 00:16:18.454 "get_zone_info": false, 00:16:18.454 "zone_management": false, 00:16:18.454 "zone_append": false, 00:16:18.454 "compare": false, 00:16:18.454 "compare_and_write": false, 00:16:18.454 "abort": false, 00:16:18.454 "seek_hole": true, 00:16:18.454 "seek_data": true, 00:16:18.454 "copy": false, 00:16:18.454 "nvme_iov_md": false 00:16:18.454 }, 00:16:18.454 "driver_specific": { 00:16:18.454 "lvol": { 00:16:18.454 "lvol_store_uuid": "6d8d02e4-5668-45d6-bd41-0f68ac65e99a", 00:16:18.454 "base_bdev": "nvme0n1", 00:16:18.454 "thin_provision": true, 00:16:18.454 "num_allocated_clusters": 0, 00:16:18.454 "snapshot": false, 00:16:18.454 "clone": false, 00:16:18.454 "esnap_clone": false 00:16:18.454 } 00:16:18.454 } 00:16:18.454 } 00:16:18.454 ]' 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:18.454 12:58:09 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local 
bdev_name=d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:18.714 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:18.973 { 00:16:18.973 "name": "d4eb12d7-4c40-4b89-af5d-08ef0358a7e0", 00:16:18.973 "aliases": [ 00:16:18.973 "lvs/nvme0n1p0" 00:16:18.973 ], 00:16:18.973 "product_name": "Logical Volume", 00:16:18.973 "block_size": 4096, 00:16:18.973 "num_blocks": 26476544, 00:16:18.973 "uuid": "d4eb12d7-4c40-4b89-af5d-08ef0358a7e0", 00:16:18.973 "assigned_rate_limits": { 00:16:18.973 "rw_ios_per_sec": 0, 00:16:18.973 "rw_mbytes_per_sec": 0, 00:16:18.973 "r_mbytes_per_sec": 0, 00:16:18.973 "w_mbytes_per_sec": 0 00:16:18.973 }, 00:16:18.973 "claimed": false, 00:16:18.973 "zoned": false, 00:16:18.973 "supported_io_types": { 00:16:18.973 "read": true, 00:16:18.973 "write": true, 00:16:18.973 "unmap": true, 00:16:18.973 "flush": false, 00:16:18.973 "reset": true, 00:16:18.973 "nvme_admin": false, 00:16:18.973 "nvme_io": false, 00:16:18.973 "nvme_io_md": false, 00:16:18.973 "write_zeroes": true, 00:16:18.973 "zcopy": false, 00:16:18.973 "get_zone_info": false, 00:16:18.973 "zone_management": false, 00:16:18.973 "zone_append": false, 00:16:18.973 "compare": false, 00:16:18.973 "compare_and_write": false, 00:16:18.973 "abort": false, 00:16:18.973 "seek_hole": true, 00:16:18.973 "seek_data": true, 00:16:18.973 "copy": false, 00:16:18.973 "nvme_iov_md": false 00:16:18.973 }, 00:16:18.973 "driver_specific": { 00:16:18.973 "lvol": { 00:16:18.973 "lvol_store_uuid": "6d8d02e4-5668-45d6-bd41-0f68ac65e99a", 00:16:18.973 "base_bdev": "nvme0n1", 00:16:18.973 "thin_provision": true, 00:16:18.973 "num_allocated_clusters": 0, 00:16:18.973 "snapshot": false, 00:16:18.973 "clone": false, 00:16:18.973 "esnap_clone": false 00:16:18.973 } 00:16:18.973 } 00:16:18.973 } 00:16:18.973 ]' 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:18.973 12:58:10 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:19.232 12:58:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:19.232 12:58:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:19.232 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:19.232 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:19.232 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 
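The bdev stack assembled so far (base NVMe, lvstore, thin lvol, cache NVMe, split slice) feeds straight into the bdev_ftl_create call issued just below. Condensed from the rpc.py invocations traced above, with placeholders standing in for the UUIDs this particular run generated (6d8d02e4-... for the lvstore, d4eb12d7-... for the lvol):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device -> nvme0n1 (1310720 x 4 KiB blocks)
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # prints the lvstore UUID
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore_uuid>          # thin-provisioned 103424 MiB volume
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache device -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB slice nvc0n1p0 used as the write buffer
  $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> -c nvc0n1p0 --l2p_dram_limit 20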
00:16:19.232 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:19.232 12:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 00:16:19.491 12:58:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:19.491 { 00:16:19.491 "name": "d4eb12d7-4c40-4b89-af5d-08ef0358a7e0", 00:16:19.491 "aliases": [ 00:16:19.491 "lvs/nvme0n1p0" 00:16:19.491 ], 00:16:19.491 "product_name": "Logical Volume", 00:16:19.491 "block_size": 4096, 00:16:19.491 "num_blocks": 26476544, 00:16:19.491 "uuid": "d4eb12d7-4c40-4b89-af5d-08ef0358a7e0", 00:16:19.491 "assigned_rate_limits": { 00:16:19.491 "rw_ios_per_sec": 0, 00:16:19.491 "rw_mbytes_per_sec": 0, 00:16:19.491 "r_mbytes_per_sec": 0, 00:16:19.491 "w_mbytes_per_sec": 0 00:16:19.491 }, 00:16:19.491 "claimed": false, 00:16:19.491 "zoned": false, 00:16:19.491 "supported_io_types": { 00:16:19.491 "read": true, 00:16:19.491 "write": true, 00:16:19.491 "unmap": true, 00:16:19.491 "flush": false, 00:16:19.491 "reset": true, 00:16:19.491 "nvme_admin": false, 00:16:19.491 "nvme_io": false, 00:16:19.491 "nvme_io_md": false, 00:16:19.491 "write_zeroes": true, 00:16:19.491 "zcopy": false, 00:16:19.491 "get_zone_info": false, 00:16:19.491 "zone_management": false, 00:16:19.491 "zone_append": false, 00:16:19.491 "compare": false, 00:16:19.491 "compare_and_write": false, 00:16:19.491 "abort": false, 00:16:19.491 "seek_hole": true, 00:16:19.491 "seek_data": true, 00:16:19.491 "copy": false, 00:16:19.491 "nvme_iov_md": false 00:16:19.491 }, 00:16:19.491 "driver_specific": { 00:16:19.491 "lvol": { 00:16:19.491 "lvol_store_uuid": "6d8d02e4-5668-45d6-bd41-0f68ac65e99a", 00:16:19.491 "base_bdev": "nvme0n1", 00:16:19.491 "thin_provision": true, 00:16:19.491 "num_allocated_clusters": 0, 00:16:19.491 "snapshot": false, 00:16:19.491 "clone": false, 00:16:19.491 "esnap_clone": false 00:16:19.491 } 00:16:19.491 } 00:16:19.491 } 00:16:19.491 ]' 00:16:19.491 12:58:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:19.750 12:58:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:19.750 12:58:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:19.750 12:58:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:19.750 12:58:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:19.750 12:58:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:19.750 12:58:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:19.750 12:58:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d4eb12d7-4c40-4b89-af5d-08ef0358a7e0 -c nvc0n1p0 --l2p_dram_limit 20 00:16:19.750 [2024-08-11 12:58:11.341025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.750 [2024-08-11 12:58:11.341079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:19.750 [2024-08-11 12:58:11.341113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:19.750 [2024-08-11 12:58:11.341127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.750 [2024-08-11 12:58:11.341200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.750 [2024-08-11 12:58:11.341220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
base bdev 00:16:19.750 [2024-08-11 12:58:11.341232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:19.750 [2024-08-11 12:58:11.341247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.750 [2024-08-11 12:58:11.341308] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:19.750 [2024-08-11 12:58:11.341610] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:19.750 [2024-08-11 12:58:11.341650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.750 [2024-08-11 12:58:11.341665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:19.750 [2024-08-11 12:58:11.341693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:16:19.750 [2024-08-11 12:58:11.341707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.751 [2024-08-11 12:58:11.341953] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1058ea62-cfc7-4252-8140-5e997c5986d5 00:16:19.751 [2024-08-11 12:58:11.342924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.751 [2024-08-11 12:58:11.342982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:19.751 [2024-08-11 12:58:11.343002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:19.751 [2024-08-11 12:58:11.343015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.010 [2024-08-11 12:58:11.347807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.010 [2024-08-11 12:58:11.347917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:20.010 [2024-08-11 12:58:11.347944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.729 ms 00:16:20.010 [2024-08-11 12:58:11.347968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.010 [2024-08-11 12:58:11.348068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.010 [2024-08-11 12:58:11.348086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:20.010 [2024-08-11 12:58:11.348102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:20.010 [2024-08-11 12:58:11.348114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.010 [2024-08-11 12:58:11.348178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.010 [2024-08-11 12:58:11.348196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:20.010 [2024-08-11 12:58:11.348214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:20.010 [2024-08-11 12:58:11.348229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.010 [2024-08-11 12:58:11.348272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:20.010 [2024-08-11 12:58:11.349801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.010 [2024-08-11 12:58:11.349853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:20.010 [2024-08-11 12:58:11.349915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:16:20.010 [2024-08-11 12:58:11.349958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:20.010 [2024-08-11 12:58:11.350023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.010 [2024-08-11 12:58:11.350042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:20.010 [2024-08-11 12:58:11.350054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:20.010 [2024-08-11 12:58:11.350074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.010 [2024-08-11 12:58:11.350096] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:20.010 [2024-08-11 12:58:11.350248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:20.010 [2024-08-11 12:58:11.350268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:20.010 [2024-08-11 12:58:11.350296] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:20.010 [2024-08-11 12:58:11.350314] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:20.010 [2024-08-11 12:58:11.350333] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:20.010 [2024-08-11 12:58:11.350347] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:20.010 [2024-08-11 12:58:11.350360] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:20.010 [2024-08-11 12:58:11.350372] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:20.010 [2024-08-11 12:58:11.350384] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:20.010 [2024-08-11 12:58:11.350396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.010 [2024-08-11 12:58:11.350413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:20.010 [2024-08-11 12:58:11.350426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:16:20.010 [2024-08-11 12:58:11.350439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.010 [2024-08-11 12:58:11.350541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.010 [2024-08-11 12:58:11.350574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:20.010 [2024-08-11 12:58:11.350586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:20.010 [2024-08-11 12:58:11.350599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.010 [2024-08-11 12:58:11.350724] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:20.010 [2024-08-11 12:58:11.350757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:20.010 [2024-08-11 12:58:11.350770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.010 [2024-08-11 12:58:11.350784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.010 [2024-08-11 12:58:11.350796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:20.010 [2024-08-11 12:58:11.350809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:20.010 [2024-08-11 12:58:11.350820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:20.010 [2024-08-11 12:58:11.350833] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:16:20.010 [2024-08-11 12:58:11.350844] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:20.010 [2024-08-11 12:58:11.350860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.010 [2024-08-11 12:58:11.350871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:20.010 [2024-08-11 12:58:11.350884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:20.010 [2024-08-11 12:58:11.350894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.010 [2024-08-11 12:58:11.350910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:20.010 [2024-08-11 12:58:11.350921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:20.010 [2024-08-11 12:58:11.350934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.010 [2024-08-11 12:58:11.350945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:20.011 [2024-08-11 12:58:11.350958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:20.011 [2024-08-11 12:58:11.350984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:20.011 [2024-08-11 12:58:11.351015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.011 [2024-08-11 12:58:11.351039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:20.011 [2024-08-11 12:58:11.351052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.011 [2024-08-11 12:58:11.351076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:20.011 [2024-08-11 12:58:11.351090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.011 [2024-08-11 12:58:11.351114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:20.011 [2024-08-11 12:58:11.351129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.011 [2024-08-11 12:58:11.351152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:20.011 [2024-08-11 12:58:11.351164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.011 [2024-08-11 12:58:11.351187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:20.011 [2024-08-11 12:58:11.351202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:20.011 [2024-08-11 12:58:11.351213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.011 [2024-08-11 12:58:11.351226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:20.011 [2024-08-11 12:58:11.351237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:20.011 [2024-08-11 
12:58:11.351250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:20.011 [2024-08-11 12:58:11.351288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:20.011 [2024-08-11 12:58:11.351298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351310] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:20.011 [2024-08-11 12:58:11.351322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:20.011 [2024-08-11 12:58:11.351350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.011 [2024-08-11 12:58:11.351362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.011 [2024-08-11 12:58:11.351383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:20.011 [2024-08-11 12:58:11.351395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:20.011 [2024-08-11 12:58:11.351422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:20.011 [2024-08-11 12:58:11.351433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:20.011 [2024-08-11 12:58:11.351445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:20.011 [2024-08-11 12:58:11.351456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:20.011 [2024-08-11 12:58:11.351476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:20.011 [2024-08-11 12:58:11.351489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.011 [2024-08-11 12:58:11.351504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:20.011 [2024-08-11 12:58:11.351515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:20.011 [2024-08-11 12:58:11.351528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:20.011 [2024-08-11 12:58:11.351539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:20.011 [2024-08-11 12:58:11.351552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:20.011 [2024-08-11 12:58:11.351563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:20.011 [2024-08-11 12:58:11.351578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:20.011 [2024-08-11 12:58:11.351589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:20.011 [2024-08-11 12:58:11.351602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:20.011 [2024-08-11 12:58:11.351622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:20.011 [2024-08-11 12:58:11.351636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:20.011 [2024-08-11 12:58:11.351647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:20.011 [2024-08-11 12:58:11.351660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:20.011 [2024-08-11 12:58:11.351672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:20.011 [2024-08-11 12:58:11.351685] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:20.011 [2024-08-11 12:58:11.351698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.011 [2024-08-11 12:58:11.351712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:20.011 [2024-08-11 12:58:11.351723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:20.011 [2024-08-11 12:58:11.351735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:20.011 [2024-08-11 12:58:11.351747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:20.011 [2024-08-11 12:58:11.351761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.011 [2024-08-11 12:58:11.351773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:20.011 [2024-08-11 12:58:11.351790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:16:20.011 [2024-08-11 12:58:11.351815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.011 [2024-08-11 12:58:11.351933] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
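The layout dump above is internally consistent: the dump reports an L2P address size of 4 bytes per entry, and the region sizes follow directly from the entry and block counts seen earlier. Two quick checks in plain shell arithmetic:
  echo $(( 20971520 * 4 / 1048576 ))      # -> 80      L2P entries x 4 B per address = the 80.00 MiB l2p region
  echo $(( 26476544 * 4096 / 1048576 ))   # -> 103424  lvol blocks x 4 KiB = the reported 103424.00 MiB base device capacity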
00:16:20.011 [2024-08-11 12:58:11.351955] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:22.541 [2024-08-11 12:58:13.588820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.541 [2024-08-11 12:58:13.588935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:22.541 [2024-08-11 12:58:13.588974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2236.894 ms 00:16:22.541 [2024-08-11 12:58:13.588986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.541 [2024-08-11 12:58:13.606577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.606644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.542 [2024-08-11 12:58:13.606671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.505 ms 00:16:22.542 [2024-08-11 12:58:13.606687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.606831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.606850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:22.542 [2024-08-11 12:58:13.606886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:22.542 [2024-08-11 12:58:13.606903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.615837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.615926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.542 [2024-08-11 12:58:13.615952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.807 ms 00:16:22.542 [2024-08-11 12:58:13.615968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.616017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.616036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:22.542 [2024-08-11 12:58:13.616054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:22.542 [2024-08-11 12:58:13.616069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.616467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.616502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.542 [2024-08-11 12:58:13.616539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:16:22.542 [2024-08-11 12:58:13.616550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.616705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.616722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.542 [2024-08-11 12:58:13.616736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:16:22.542 [2024-08-11 12:58:13.616748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.621153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.621204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.542 [2024-08-11 
12:58:13.621239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:16:22.542 [2024-08-11 12:58:13.621249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.628950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:22.542 [2024-08-11 12:58:13.633489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.633555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:22.542 [2024-08-11 12:58:13.633578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.119 ms 00:16:22.542 [2024-08-11 12:58:13.633595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.681435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.681525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:22.542 [2024-08-11 12:58:13.681544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.808 ms 00:16:22.542 [2024-08-11 12:58:13.681559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.681767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.681818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:22.542 [2024-08-11 12:58:13.681831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:16:22.542 [2024-08-11 12:58:13.681843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.685754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.685827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:22.542 [2024-08-11 12:58:13.685843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.885 ms 00:16:22.542 [2024-08-11 12:58:13.685856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.689116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.689177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:22.542 [2024-08-11 12:58:13.689194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.186 ms 00:16:22.542 [2024-08-11 12:58:13.689207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.689576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.689606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:22.542 [2024-08-11 12:58:13.689620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:16:22.542 [2024-08-11 12:58:13.689634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.721033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.721137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:22.542 [2024-08-11 12:58:13.721156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.372 ms 00:16:22.542 [2024-08-11 12:58:13.721170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.725362] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.725434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:22.542 [2024-08-11 12:58:13.725459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.150 ms 00:16:22.542 [2024-08-11 12:58:13.725473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.728991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.729085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:22.542 [2024-08-11 12:58:13.729101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.476 ms 00:16:22.542 [2024-08-11 12:58:13.729113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.733199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.733261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:22.542 [2024-08-11 12:58:13.733278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:16:22.542 [2024-08-11 12:58:13.733294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.733383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.733407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:22.542 [2024-08-11 12:58:13.733420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:22.542 [2024-08-11 12:58:13.733433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.733525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.542 [2024-08-11 12:58:13.733543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:22.542 [2024-08-11 12:58:13.733555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:22.542 [2024-08-11 12:58:13.733568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.542 [2024-08-11 12:58:13.734710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2393.163 ms, result 0 00:16:22.542 { 00:16:22.542 "name": "ftl0", 00:16:22.542 "uuid": "1058ea62-cfc7-4252-8140-5e997c5986d5" 00:16:22.542 } 00:16:22.542 12:58:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:22.542 12:58:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:22.542 12:58:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:22.542 12:58:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:22.542 [2024-08-11 12:58:14.121095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:22.542 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:22.542 Zero copy mechanism will not be used. 00:16:22.542 Running I/O for 4 seconds... 
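The zero-copy notice above is expected for this first pass: -o 69632 is a 68 KiB I/O (17 blocks of 4 KiB), just past the 65536-byte zero-copy threshold the message quotes, so the data is copied instead during this 4-second, queue-depth-1 randwrite run:
  echo $(( 69632 / 4096 ))    # -> 17    4 KiB blocks per I/O
  echo $(( 69632 - 65536 ))   # -> 4096  bytes over the zero-copy threshold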
00:16:26.731 00:16:26.731 Latency(us) 00:16:26.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.732 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:26.732 ftl0 : 4.00 1702.36 113.05 0.00 0.00 615.46 249.48 1176.67 00:16:26.732 =================================================================================================================== 00:16:26.732 Total : 1702.36 113.05 0.00 0.00 615.46 249.48 1176.67 00:16:26.732 [2024-08-11 12:58:18.127676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:26.732 0 00:16:26.732 12:58:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:26.732 [2024-08-11 12:58:18.273892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:26.732 Running I/O for 4 seconds... 00:16:30.919 00:16:30.919 Latency(us) 00:16:30.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.920 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:30.920 ftl0 : 4.02 7365.00 28.77 0.00 0.00 17326.83 316.51 32172.22 00:16:30.920 =================================================================================================================== 00:16:30.920 Total : 7365.00 28.77 0.00 0.00 17326.83 0.00 32172.22 00:16:30.920 [2024-08-11 12:58:22.301042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:30.920 0 00:16:30.920 12:58:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:30.920 [2024-08-11 12:58:22.449354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:30.920 Running I/O for 4 seconds... 
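A quick sanity check on the two randwrite tables above: the MiB/s column is simply IOPS multiplied by the I/O size, so the printed throughput can be reproduced from the other columns (values copied from the tables, rounding as printed):
  awk 'BEGIN { printf "%.2f\n", 1702.36 * 69632 / 1048576 }'   # -> 113.05  (-q 1,   68 KiB randwrite)
  awk 'BEGIN { printf "%.2f\n", 7365.00 * 4096  / 1048576 }'   # -> 28.77   (-q 128,  4 KiB randwrite)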
00:16:35.179 00:16:35.179 Latency(us) 00:16:35.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.179 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:35.179 Verification LBA range: start 0x0 length 0x1400000 00:16:35.179 ftl0 : 4.01 5157.30 20.15 0.00 0.00 24724.88 366.78 27525.12 00:16:35.179 =================================================================================================================== 00:16:35.179 Total : 5157.30 20.15 0.00 0.00 24724.88 0.00 27525.12 00:16:35.179 [2024-08-11 12:58:26.470427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:35.179 0 00:16:35.179 12:58:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:35.179 [2024-08-11 12:58:26.752397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.179 [2024-08-11 12:58:26.752479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:35.179 [2024-08-11 12:58:26.752497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:35.179 [2024-08-11 12:58:26.752509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.179 [2024-08-11 12:58:26.752537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.179 [2024-08-11 12:58:26.752990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.179 [2024-08-11 12:58:26.753018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:35.179 [2024-08-11 12:58:26.753044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:16:35.179 [2024-08-11 12:58:26.753056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.179 [2024-08-11 12:58:26.754662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.179 [2024-08-11 12:58:26.754748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:35.179 [2024-08-11 12:58:26.754785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:16:35.179 [2024-08-11 12:58:26.754796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.933330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.933400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:35.440 [2024-08-11 12:58:26.933423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.503 ms 00:16:35.440 [2024-08-11 12:58:26.933435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.939297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.939345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:35.440 [2024-08-11 12:58:26.939361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.816 ms 00:16:35.440 [2024-08-11 12:58:26.939374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.940858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.940917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:35.440 [2024-08-11 12:58:26.940950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.423 ms 00:16:35.440 [2024-08-11 12:58:26.940961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.944825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.944876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:35.440 [2024-08-11 12:58:26.944914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:16:35.440 [2024-08-11 12:58:26.944926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.945052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.945086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:35.440 [2024-08-11 12:58:26.945139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:35.440 [2024-08-11 12:58:26.945151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.946789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.946866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:35.440 [2024-08-11 12:58:26.946881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:16:35.440 [2024-08-11 12:58:26.946907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.948349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.948394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:35.440 [2024-08-11 12:58:26.948410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:16:35.440 [2024-08-11 12:58:26.948421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.949519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.949563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:35.440 [2024-08-11 12:58:26.949592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:16:35.440 [2024-08-11 12:58:26.949602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.950927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.440 [2024-08-11 12:58:26.951013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:35.440 [2024-08-11 12:58:26.951032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:16:35.440 [2024-08-11 12:58:26.951043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.440 [2024-08-11 12:58:26.951082] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:35.440 [2024-08-11 12:58:26.951103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 
[2024-08-11 12:58:26.951154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:16:35.440 [2024-08-11 12:58:26.951520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:35.440 [2024-08-11 12:58:26.951614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.951997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:35.441 [2024-08-11 12:58:26.952503] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.441 [2024-08-11 12:58:26.952516] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1058ea62-cfc7-4252-8140-5e997c5986d5 00:16:35.441 [2024-08-11 12:58:26.952528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.441 [2024-08-11 12:58:26.952540] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:16:35.441 [2024-08-11 12:58:26.952550] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.441 [2024-08-11 12:58:26.952564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.441 [2024-08-11 12:58:26.952576] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.441 [2024-08-11 12:58:26.952590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.441 [2024-08-11 12:58:26.952610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.441 [2024-08-11 12:58:26.952623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.441 [2024-08-11 12:58:26.952632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.441 [2024-08-11 12:58:26.952645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.441 [2024-08-11 12:58:26.952655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.441 [2024-08-11 12:58:26.952677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:16:35.441 [2024-08-11 12:58:26.952687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.441 [2024-08-11 12:58:26.954048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.441 [2024-08-11 12:58:26.954075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.441 [2024-08-11 12:58:26.954110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:16:35.441 [2024-08-11 12:58:26.954133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.441 [2024-08-11 12:58:26.954267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.441 [2024-08-11 12:58:26.954285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.441 [2024-08-11 12:58:26.954300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:35.441 [2024-08-11 12:58:26.954311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.441 [2024-08-11 12:58:26.958666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.441 [2024-08-11 12:58:26.958694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.441 [2024-08-11 12:58:26.958712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.441 [2024-08-11 12:58:26.958722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.441 [2024-08-11 12:58:26.958775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.441 [2024-08-11 12:58:26.958788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.441 [2024-08-11 12:58:26.958800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.441 [2024-08-11 12:58:26.958809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.441 [2024-08-11 12:58:26.958904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.441 [2024-08-11 12:58:26.958953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.441 [2024-08-11 12:58:26.958971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.441 [2024-08-11 12:58:26.958984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.441 [2024-08-11 12:58:26.959008] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.959021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.442 [2024-08-11 12:58:26.959033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.959043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.967059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.967119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.442 [2024-08-11 12:58:26.967141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.967152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.973845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.973978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.442 [2024-08-11 12:58:26.974000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.974012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.974127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.974145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.442 [2024-08-11 12:58:26.974160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.974171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.974221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.974252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.442 [2024-08-11 12:58:26.974295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.974315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.974435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.974468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.442 [2024-08-11 12:58:26.974490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.974501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.974550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.974566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.442 [2024-08-11 12:58:26.974579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.974589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.974632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.974646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.442 [2024-08-11 12:58:26.974658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.974668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:35.442 [2024-08-11 12:58:26.974721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.442 [2024-08-11 12:58:26.974736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.442 [2024-08-11 12:58:26.974748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.442 [2024-08-11 12:58:26.974758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.442 [2024-08-11 12:58:26.974959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 222.462 ms, result 0 00:16:35.442 true 00:16:35.442 12:58:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 84235 00:16:35.442 12:58:26 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 84235 ']' 00:16:35.442 12:58:26 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 84235 00:16:35.442 12:58:26 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:16:35.442 12:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:35.442 12:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84235 00:16:35.701 killing process with pid 84235 00:16:35.701 Received shutdown signal, test time was about 4.000000 seconds 00:16:35.701 00:16:35.701 Latency(us) 00:16:35.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.701 =================================================================================================================== 00:16:35.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:35.701 12:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:35.701 12:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:35.701 12:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84235' 00:16:35.701 12:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 84235 00:16:35.701 12:58:27 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 84235 00:16:38.236 12:58:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:38.236 12:58:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:38.236 12:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.236 12:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:38.496 Remove shared memory files 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:38.496 ************************************ 00:16:38.496 END TEST ftl_bdevperf 00:16:38.496 ************************************ 00:16:38.496 00:16:38.496 real 0m22.430s 00:16:38.496 user 0m25.754s 00:16:38.496 sys 0m0.955s 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:38.496 12:58:29 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:16:38.496 12:58:29 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:38.496 12:58:29 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:38.496 12:58:29 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:38.496 12:58:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:38.496 ************************************ 00:16:38.496 START TEST ftl_trim 00:16:38.496 ************************************ 00:16:38.496 12:58:29 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:38.496 * Looking for test storage... 00:16:38.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:38.496 12:58:30 ftl.ftl_trim 
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=84575 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:38.496 12:58:30 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 84575 00:16:38.496 12:58:30 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 84575 ']' 00:16:38.496 12:58:30 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.496 12:58:30 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:38.496 12:58:30 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.496 12:58:30 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:38.496 12:58:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:38.755 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:38.755 [2024-08-11 12:58:30.176587] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:16:38.755 [2024-08-11 12:58:30.176796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84575 ] 00:16:38.755 [2024-08-11 12:58:30.328547] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.013 [2024-08-11 12:58:30.373304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.013 [2024-08-11 12:58:30.373378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.013 [2024-08-11 12:58:30.373433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.581 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:39.581 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:16:39.581 12:58:31 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:39.581 12:58:31 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:39.581 12:58:31 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:39.581 12:58:31 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:39.581 12:58:31 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:39.581 12:58:31 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:40.148 12:58:31 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:40.148 12:58:31 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:40.148 12:58:31 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:40.148 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:40.148 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:40.148 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:40.148 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:40.148 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:40.148 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:40.148 { 00:16:40.148 "name": "nvme0n1", 00:16:40.148 "aliases": [ 00:16:40.148 "8cfa6f63-8c6a-437e-a146-5d4894b83642" 00:16:40.148 ], 00:16:40.148 "product_name": "NVMe disk", 00:16:40.148 "block_size": 4096, 00:16:40.148 "num_blocks": 1310720, 00:16:40.148 "uuid": "8cfa6f63-8c6a-437e-a146-5d4894b83642", 00:16:40.148 "assigned_rate_limits": { 00:16:40.148 "rw_ios_per_sec": 0, 00:16:40.148 "rw_mbytes_per_sec": 0, 00:16:40.148 "r_mbytes_per_sec": 0, 00:16:40.148 "w_mbytes_per_sec": 0 00:16:40.148 }, 00:16:40.148 "claimed": true, 00:16:40.148 "claim_type": "read_many_write_one", 00:16:40.148 "zoned": false, 00:16:40.148 "supported_io_types": { 00:16:40.148 "read": true, 00:16:40.148 "write": true, 00:16:40.148 "unmap": true, 00:16:40.148 "flush": true, 00:16:40.148 "reset": true, 00:16:40.148 "nvme_admin": true, 00:16:40.148 "nvme_io": true, 00:16:40.148 "nvme_io_md": false, 00:16:40.148 "write_zeroes": true, 00:16:40.148 "zcopy": false, 00:16:40.148 "get_zone_info": false, 00:16:40.148 "zone_management": false, 00:16:40.148 "zone_append": false, 00:16:40.148 "compare": true, 00:16:40.148 "compare_and_write": false, 00:16:40.148 "abort": true, 00:16:40.148 "seek_hole": false, 00:16:40.148 "seek_data": false, 00:16:40.148 
"copy": true, 00:16:40.148 "nvme_iov_md": false 00:16:40.148 }, 00:16:40.148 "driver_specific": { 00:16:40.148 "nvme": [ 00:16:40.148 { 00:16:40.148 "pci_address": "0000:00:11.0", 00:16:40.148 "trid": { 00:16:40.148 "trtype": "PCIe", 00:16:40.148 "traddr": "0000:00:11.0" 00:16:40.148 }, 00:16:40.148 "ctrlr_data": { 00:16:40.148 "cntlid": 0, 00:16:40.148 "vendor_id": "0x1b36", 00:16:40.148 "model_number": "QEMU NVMe Ctrl", 00:16:40.148 "serial_number": "12341", 00:16:40.148 "firmware_revision": "8.0.0", 00:16:40.148 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:40.148 "oacs": { 00:16:40.148 "security": 0, 00:16:40.148 "format": 1, 00:16:40.148 "firmware": 0, 00:16:40.148 "ns_manage": 1 00:16:40.148 }, 00:16:40.148 "multi_ctrlr": false, 00:16:40.148 "ana_reporting": false 00:16:40.148 }, 00:16:40.148 "vs": { 00:16:40.148 "nvme_version": "1.4" 00:16:40.148 }, 00:16:40.148 "ns_data": { 00:16:40.148 "id": 1, 00:16:40.148 "can_share": false 00:16:40.148 } 00:16:40.148 } 00:16:40.148 ], 00:16:40.148 "mp_policy": "active_passive" 00:16:40.148 } 00:16:40.148 } 00:16:40.148 ]' 00:16:40.148 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:40.407 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:40.407 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:40.407 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:40.407 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:40.407 12:58:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:16:40.407 12:58:31 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:40.407 12:58:31 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:40.407 12:58:31 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:40.407 12:58:31 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:40.407 12:58:31 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:40.666 12:58:32 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=6d8d02e4-5668-45d6-bd41-0f68ac65e99a 00:16:40.666 12:58:32 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:40.666 12:58:32 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d8d02e4-5668-45d6-bd41-0f68ac65e99a 00:16:40.925 12:58:32 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:41.183 12:58:32 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7b318b06-58c0-48c6-8115-6e36e6847ed9 00:16:41.183 12:58:32 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7b318b06-58c0-48c6-8115-6e36e6847ed9 00:16:41.442 12:58:32 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6c3709ad-543b-4403-8168-46825a7e39be 00:16:41.442 12:58:32 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6c3709ad-543b-4403-8168-46825a7e39be 00:16:41.442 12:58:32 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:41.442 12:58:32 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:41.442 12:58:32 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6c3709ad-543b-4403-8168-46825a7e39be 00:16:41.442 12:58:32 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:41.442 12:58:32 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6c3709ad-543b-4403-8168-46825a7e39be 00:16:41.442 12:58:32 
ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=6c3709ad-543b-4403-8168-46825a7e39be 00:16:41.442 12:58:32 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:41.442 12:58:32 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:41.442 12:58:32 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:41.442 12:58:32 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c3709ad-543b-4403-8168-46825a7e39be 00:16:41.718 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:41.718 { 00:16:41.718 "name": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:41.718 "aliases": [ 00:16:41.718 "lvs/nvme0n1p0" 00:16:41.718 ], 00:16:41.718 "product_name": "Logical Volume", 00:16:41.718 "block_size": 4096, 00:16:41.718 "num_blocks": 26476544, 00:16:41.718 "uuid": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:41.718 "assigned_rate_limits": { 00:16:41.718 "rw_ios_per_sec": 0, 00:16:41.718 "rw_mbytes_per_sec": 0, 00:16:41.718 "r_mbytes_per_sec": 0, 00:16:41.718 "w_mbytes_per_sec": 0 00:16:41.718 }, 00:16:41.718 "claimed": false, 00:16:41.718 "zoned": false, 00:16:41.718 "supported_io_types": { 00:16:41.718 "read": true, 00:16:41.718 "write": true, 00:16:41.718 "unmap": true, 00:16:41.718 "flush": false, 00:16:41.718 "reset": true, 00:16:41.718 "nvme_admin": false, 00:16:41.718 "nvme_io": false, 00:16:41.718 "nvme_io_md": false, 00:16:41.718 "write_zeroes": true, 00:16:41.718 "zcopy": false, 00:16:41.718 "get_zone_info": false, 00:16:41.718 "zone_management": false, 00:16:41.718 "zone_append": false, 00:16:41.718 "compare": false, 00:16:41.718 "compare_and_write": false, 00:16:41.718 "abort": false, 00:16:41.718 "seek_hole": true, 00:16:41.718 "seek_data": true, 00:16:41.718 "copy": false, 00:16:41.718 "nvme_iov_md": false 00:16:41.718 }, 00:16:41.718 "driver_specific": { 00:16:41.718 "lvol": { 00:16:41.718 "lvol_store_uuid": "7b318b06-58c0-48c6-8115-6e36e6847ed9", 00:16:41.718 "base_bdev": "nvme0n1", 00:16:41.718 "thin_provision": true, 00:16:41.718 "num_allocated_clusters": 0, 00:16:41.718 "snapshot": false, 00:16:41.718 "clone": false, 00:16:41.718 "esnap_clone": false 00:16:41.718 } 00:16:41.718 } 00:16:41.718 } 00:16:41.718 ]' 00:16:41.718 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:41.718 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:41.718 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:41.718 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:41.718 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:41.718 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:41.718 12:58:33 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:41.718 12:58:33 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:41.718 12:58:33 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:42.300 12:58:33 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:42.300 12:58:33 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:42.300 12:58:33 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6c3709ad-543b-4403-8168-46825a7e39be 00:16:42.300 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=6c3709ad-543b-4403-8168-46825a7e39be 00:16:42.300 
12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:42.300 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:42.300 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:42.300 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c3709ad-543b-4403-8168-46825a7e39be 00:16:42.300 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:42.300 { 00:16:42.300 "name": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:42.300 "aliases": [ 00:16:42.300 "lvs/nvme0n1p0" 00:16:42.300 ], 00:16:42.300 "product_name": "Logical Volume", 00:16:42.300 "block_size": 4096, 00:16:42.300 "num_blocks": 26476544, 00:16:42.300 "uuid": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:42.300 "assigned_rate_limits": { 00:16:42.300 "rw_ios_per_sec": 0, 00:16:42.300 "rw_mbytes_per_sec": 0, 00:16:42.300 "r_mbytes_per_sec": 0, 00:16:42.300 "w_mbytes_per_sec": 0 00:16:42.300 }, 00:16:42.300 "claimed": false, 00:16:42.300 "zoned": false, 00:16:42.300 "supported_io_types": { 00:16:42.300 "read": true, 00:16:42.300 "write": true, 00:16:42.300 "unmap": true, 00:16:42.300 "flush": false, 00:16:42.300 "reset": true, 00:16:42.300 "nvme_admin": false, 00:16:42.300 "nvme_io": false, 00:16:42.300 "nvme_io_md": false, 00:16:42.300 "write_zeroes": true, 00:16:42.300 "zcopy": false, 00:16:42.300 "get_zone_info": false, 00:16:42.300 "zone_management": false, 00:16:42.300 "zone_append": false, 00:16:42.300 "compare": false, 00:16:42.300 "compare_and_write": false, 00:16:42.300 "abort": false, 00:16:42.300 "seek_hole": true, 00:16:42.300 "seek_data": true, 00:16:42.300 "copy": false, 00:16:42.300 "nvme_iov_md": false 00:16:42.300 }, 00:16:42.300 "driver_specific": { 00:16:42.300 "lvol": { 00:16:42.300 "lvol_store_uuid": "7b318b06-58c0-48c6-8115-6e36e6847ed9", 00:16:42.300 "base_bdev": "nvme0n1", 00:16:42.300 "thin_provision": true, 00:16:42.300 "num_allocated_clusters": 0, 00:16:42.300 "snapshot": false, 00:16:42.300 "clone": false, 00:16:42.300 "esnap_clone": false 00:16:42.300 } 00:16:42.300 } 00:16:42.300 } 00:16:42.300 ]' 00:16:42.558 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:42.558 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:42.558 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:42.558 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:42.558 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:42.558 12:58:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:42.558 12:58:33 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:42.558 12:58:33 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:42.817 12:58:34 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:42.817 12:58:34 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:42.817 12:58:34 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6c3709ad-543b-4403-8168-46825a7e39be 00:16:42.817 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=6c3709ad-543b-4403-8168-46825a7e39be 00:16:42.817 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:42.817 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:42.817 12:58:34 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:16:42.817 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c3709ad-543b-4403-8168-46825a7e39be 00:16:43.076 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:43.076 { 00:16:43.076 "name": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:43.076 "aliases": [ 00:16:43.076 "lvs/nvme0n1p0" 00:16:43.076 ], 00:16:43.076 "product_name": "Logical Volume", 00:16:43.076 "block_size": 4096, 00:16:43.076 "num_blocks": 26476544, 00:16:43.076 "uuid": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:43.076 "assigned_rate_limits": { 00:16:43.076 "rw_ios_per_sec": 0, 00:16:43.076 "rw_mbytes_per_sec": 0, 00:16:43.076 "r_mbytes_per_sec": 0, 00:16:43.076 "w_mbytes_per_sec": 0 00:16:43.076 }, 00:16:43.076 "claimed": false, 00:16:43.076 "zoned": false, 00:16:43.076 "supported_io_types": { 00:16:43.076 "read": true, 00:16:43.076 "write": true, 00:16:43.076 "unmap": true, 00:16:43.076 "flush": false, 00:16:43.076 "reset": true, 00:16:43.076 "nvme_admin": false, 00:16:43.076 "nvme_io": false, 00:16:43.076 "nvme_io_md": false, 00:16:43.076 "write_zeroes": true, 00:16:43.076 "zcopy": false, 00:16:43.076 "get_zone_info": false, 00:16:43.076 "zone_management": false, 00:16:43.076 "zone_append": false, 00:16:43.076 "compare": false, 00:16:43.076 "compare_and_write": false, 00:16:43.076 "abort": false, 00:16:43.076 "seek_hole": true, 00:16:43.076 "seek_data": true, 00:16:43.076 "copy": false, 00:16:43.076 "nvme_iov_md": false 00:16:43.076 }, 00:16:43.076 "driver_specific": { 00:16:43.076 "lvol": { 00:16:43.076 "lvol_store_uuid": "7b318b06-58c0-48c6-8115-6e36e6847ed9", 00:16:43.076 "base_bdev": "nvme0n1", 00:16:43.076 "thin_provision": true, 00:16:43.076 "num_allocated_clusters": 0, 00:16:43.076 "snapshot": false, 00:16:43.076 "clone": false, 00:16:43.076 "esnap_clone": false 00:16:43.076 } 00:16:43.076 } 00:16:43.076 } 00:16:43.076 ]' 00:16:43.076 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:43.076 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:43.076 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:43.076 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:43.076 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:43.076 12:58:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:43.076 12:58:34 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:43.076 12:58:34 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6c3709ad-543b-4403-8168-46825a7e39be -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:43.336 [2024-08-11 12:58:34.796526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.796755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:43.336 [2024-08-11 12:58:34.796793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:43.336 [2024-08-11 12:58:34.796808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.799454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.799496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.336 [2024-08-11 12:58:34.799531] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:16:43.336 [2024-08-11 12:58:34.799543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.799691] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:43.336 [2024-08-11 12:58:34.800165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:43.336 [2024-08-11 12:58:34.800451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.800580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.336 [2024-08-11 12:58:34.800719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:16:43.336 [2024-08-11 12:58:34.800743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.800988] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d72816ad-f32f-44ac-983e-8aec6b187201 00:16:43.336 [2024-08-11 12:58:34.801975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.802016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:43.336 [2024-08-11 12:58:34.802033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:43.336 [2024-08-11 12:58:34.802048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.806578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.806640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.336 [2024-08-11 12:58:34.806657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.423 ms 00:16:43.336 [2024-08-11 12:58:34.806671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.806818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.806846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.336 [2024-08-11 12:58:34.806860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:43.336 [2024-08-11 12:58:34.806927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.807004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.807026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:43.336 [2024-08-11 12:58:34.807040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:43.336 [2024-08-11 12:58:34.807054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.807101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:43.336 [2024-08-11 12:58:34.808684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.808718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.336 [2024-08-11 12:58:34.808752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:16:43.336 [2024-08-11 12:58:34.808763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 
12:58:34.808815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.808831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:43.336 [2024-08-11 12:58:34.808844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:43.336 [2024-08-11 12:58:34.808855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.808932] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:43.336 [2024-08-11 12:58:34.809089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:43.336 [2024-08-11 12:58:34.809116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:43.336 [2024-08-11 12:58:34.809130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:43.336 [2024-08-11 12:58:34.809147] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809160] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809195] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:43.336 [2024-08-11 12:58:34.809206] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:43.336 [2024-08-11 12:58:34.809234] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:43.336 [2024-08-11 12:58:34.809245] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:43.336 [2024-08-11 12:58:34.809273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.809285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:43.336 [2024-08-11 12:58:34.809313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:16:43.336 [2024-08-11 12:58:34.809336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.809436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.336 [2024-08-11 12:58:34.809450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:43.336 [2024-08-11 12:58:34.809466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:43.336 [2024-08-11 12:58:34.809479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.336 [2024-08-11 12:58:34.809596] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:43.336 [2024-08-11 12:58:34.809611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:43.336 [2024-08-11 12:58:34.809625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809655] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:43.336 [2024-08-11 12:58:34.809681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:16:43.336 [2024-08-11 12:58:34.809716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.336 [2024-08-11 12:58:34.809738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:43.336 [2024-08-11 12:58:34.809749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:43.336 [2024-08-11 12:58:34.809761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.336 [2024-08-11 12:58:34.809771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:43.336 [2024-08-11 12:58:34.809785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:43.336 [2024-08-11 12:58:34.809796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:43.336 [2024-08-11 12:58:34.809818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:43.336 [2024-08-11 12:58:34.809855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:43.336 [2024-08-11 12:58:34.809904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:43.336 [2024-08-11 12:58:34.809938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:43.336 [2024-08-11 12:58:34.809949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.336 [2024-08-11 12:58:34.809980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:43.337 [2024-08-11 12:58:34.809992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:43.337 [2024-08-11 12:58:34.810006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.337 [2024-08-11 12:58:34.810016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:43.337 [2024-08-11 12:58:34.810028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:43.337 [2024-08-11 12:58:34.810038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.337 [2024-08-11 12:58:34.810050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:43.337 [2024-08-11 12:58:34.810060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:43.337 [2024-08-11 12:58:34.810072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.337 [2024-08-11 12:58:34.810082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:43.337 [2024-08-11 12:58:34.810095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:43.337 [2024-08-11 12:58:34.810105] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.337 [2024-08-11 12:58:34.810117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:43.337 [2024-08-11 12:58:34.810128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:43.337 [2024-08-11 12:58:34.810139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.337 [2024-08-11 12:58:34.810149] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:43.337 [2024-08-11 12:58:34.810162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:43.337 [2024-08-11 12:58:34.810173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.337 [2024-08-11 12:58:34.810187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.337 [2024-08-11 12:58:34.810202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:43.337 [2024-08-11 12:58:34.810215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:43.337 [2024-08-11 12:58:34.810224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:43.337 [2024-08-11 12:58:34.810236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:43.337 [2024-08-11 12:58:34.810246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:43.337 [2024-08-11 12:58:34.810261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:43.337 [2024-08-11 12:58:34.810277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:43.337 [2024-08-11 12:58:34.810300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.337 [2024-08-11 12:58:34.810313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:43.337 [2024-08-11 12:58:34.810326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:43.337 [2024-08-11 12:58:34.810337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:43.337 [2024-08-11 12:58:34.810350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:43.337 [2024-08-11 12:58:34.810360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:43.337 [2024-08-11 12:58:34.810373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:43.337 [2024-08-11 12:58:34.810384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:43.337 [2024-08-11 12:58:34.810398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:43.337 [2024-08-11 12:58:34.810409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:43.337 [2024-08-11 12:58:34.810422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:43.337 [2024-08-11 12:58:34.810432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:43.337 [2024-08-11 12:58:34.810445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:43.337 [2024-08-11 12:58:34.810456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:43.337 [2024-08-11 12:58:34.810469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:43.337 [2024-08-11 12:58:34.810479] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:43.337 [2024-08-11 12:58:34.810493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.337 [2024-08-11 12:58:34.810505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:43.337 [2024-08-11 12:58:34.810518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:43.337 [2024-08-11 12:58:34.810529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:43.337 [2024-08-11 12:58:34.810542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:43.337 [2024-08-11 12:58:34.810553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.337 [2024-08-11 12:58:34.810568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:43.337 [2024-08-11 12:58:34.810580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:16:43.337 [2024-08-11 12:58:34.810595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.337 [2024-08-11 12:58:34.810679] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:43.337 [2024-08-11 12:58:34.810715] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:45.867 [2024-08-11 12:58:36.895700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.895774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:45.867 [2024-08-11 12:58:36.895811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2085.032 ms 00:16:45.867 [2024-08-11 12:58:36.895825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.902856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.902959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.867 [2024-08-11 12:58:36.902979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.847 ms 00:16:45.867 [2024-08-11 12:58:36.903002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.903171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.903198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.867 [2024-08-11 12:58:36.903212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:45.867 [2024-08-11 12:58:36.903265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.924945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.925033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.867 [2024-08-11 12:58:36.925061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.635 ms 00:16:45.867 [2024-08-11 12:58:36.925110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.925301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.925334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.867 [2024-08-11 12:58:36.925377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.867 [2024-08-11 12:58:36.925399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.925821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.925861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.867 [2024-08-11 12:58:36.925901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:16:45.867 [2024-08-11 12:58:36.925933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.926163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.926208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.867 [2024-08-11 12:58:36.926247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:16:45.867 [2024-08-11 12:58:36.926268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.933254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.867 [2024-08-11 12:58:36.933345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.867 [2024-08-11 
12:58:36.933362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.928 ms 00:16:45.867 [2024-08-11 12:58:36.933375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.867 [2024-08-11 12:58:36.941580] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.867 [2024-08-11 12:58:36.954852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:36.954954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.868 [2024-08-11 12:58:36.954976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.344 ms 00:16:45.868 [2024-08-11 12:58:36.955008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.005103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.005171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:45.868 [2024-08-11 12:58:37.005224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.958 ms 00:16:45.868 [2024-08-11 12:58:37.005237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.005536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.005557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:45.868 [2024-08-11 12:58:37.005573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:16:45.868 [2024-08-11 12:58:37.005602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.009331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.009371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:45.868 [2024-08-11 12:58:37.009410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:16:45.868 [2024-08-11 12:58:37.009421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.012749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.012951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:45.868 [2024-08-11 12:58:37.012984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:16:45.868 [2024-08-11 12:58:37.012997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.013355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.013376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.868 [2024-08-11 12:58:37.013391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:45.868 [2024-08-11 12:58:37.013432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.044658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.044946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:45.868 [2024-08-11 12:58:37.044986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.179 ms 00:16:45.868 [2024-08-11 12:58:37.045017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.049328] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.049377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:45.868 [2024-08-11 12:58:37.049413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:16:45.868 [2024-08-11 12:58:37.049424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.052946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.052984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:45.868 [2024-08-11 12:58:37.053018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:16:45.868 [2024-08-11 12:58:37.053029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.057084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.057124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.868 [2024-08-11 12:58:37.057159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:16:45.868 [2024-08-11 12:58:37.057171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.057265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.057300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.868 [2024-08-11 12:58:37.057315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:45.868 [2024-08-11 12:58:37.057326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.057436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.868 [2024-08-11 12:58:37.057453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:45.868 [2024-08-11 12:58:37.057472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:45.868 [2024-08-11 12:58:37.057486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.868 [2024-08-11 12:58:37.058564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.868 [2024-08-11 12:58:37.059787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2261.682 ms, result 0 00:16:45.868 [2024-08-11 12:58:37.060751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.868 { 00:16:45.868 "name": "ftl0", 00:16:45.868 "uuid": "d72816ad-f32f-44ac-983e-8aec6b187201" 00:16:45.868 } 00:16:45.868 12:58:37 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:45.868 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:16:45.868 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:45.868 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:16:45.868 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:45.868 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:45.868 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.868 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:46.127 [ 00:16:46.127 { 00:16:46.127 "name": "ftl0", 00:16:46.127 "aliases": [ 00:16:46.127 "d72816ad-f32f-44ac-983e-8aec6b187201" 00:16:46.127 ], 00:16:46.127 "product_name": "FTL disk", 00:16:46.127 "block_size": 4096, 00:16:46.127 "num_blocks": 23592960, 00:16:46.127 "uuid": "d72816ad-f32f-44ac-983e-8aec6b187201", 00:16:46.127 "assigned_rate_limits": { 00:16:46.127 "rw_ios_per_sec": 0, 00:16:46.127 "rw_mbytes_per_sec": 0, 00:16:46.127 "r_mbytes_per_sec": 0, 00:16:46.127 "w_mbytes_per_sec": 0 00:16:46.127 }, 00:16:46.127 "claimed": false, 00:16:46.127 "zoned": false, 00:16:46.127 "supported_io_types": { 00:16:46.127 "read": true, 00:16:46.127 "write": true, 00:16:46.127 "unmap": true, 00:16:46.127 "flush": true, 00:16:46.127 "reset": false, 00:16:46.127 "nvme_admin": false, 00:16:46.127 "nvme_io": false, 00:16:46.127 "nvme_io_md": false, 00:16:46.127 "write_zeroes": true, 00:16:46.127 "zcopy": false, 00:16:46.127 "get_zone_info": false, 00:16:46.127 "zone_management": false, 00:16:46.127 "zone_append": false, 00:16:46.127 "compare": false, 00:16:46.127 "compare_and_write": false, 00:16:46.127 "abort": false, 00:16:46.127 "seek_hole": false, 00:16:46.127 "seek_data": false, 00:16:46.127 "copy": false, 00:16:46.127 "nvme_iov_md": false 00:16:46.127 }, 00:16:46.127 "driver_specific": { 00:16:46.127 "ftl": { 00:16:46.127 "base_bdev": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:46.127 "cache": "nvc0n1p0" 00:16:46.127 } 00:16:46.127 } 00:16:46.127 } 00:16:46.127 ] 00:16:46.127 12:58:37 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:16:46.127 12:58:37 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:46.127 12:58:37 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:46.386 12:58:37 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:46.386 12:58:37 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:46.645 12:58:38 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:46.645 { 00:16:46.645 "name": "ftl0", 00:16:46.645 "aliases": [ 00:16:46.645 "d72816ad-f32f-44ac-983e-8aec6b187201" 00:16:46.645 ], 00:16:46.645 "product_name": "FTL disk", 00:16:46.645 "block_size": 4096, 00:16:46.645 "num_blocks": 23592960, 00:16:46.645 "uuid": "d72816ad-f32f-44ac-983e-8aec6b187201", 00:16:46.645 "assigned_rate_limits": { 00:16:46.645 "rw_ios_per_sec": 0, 00:16:46.645 "rw_mbytes_per_sec": 0, 00:16:46.645 "r_mbytes_per_sec": 0, 00:16:46.645 "w_mbytes_per_sec": 0 00:16:46.645 }, 00:16:46.645 "claimed": false, 00:16:46.645 "zoned": false, 00:16:46.645 "supported_io_types": { 00:16:46.645 "read": true, 00:16:46.645 "write": true, 00:16:46.645 "unmap": true, 00:16:46.645 "flush": true, 00:16:46.645 "reset": false, 00:16:46.645 "nvme_admin": false, 00:16:46.645 "nvme_io": false, 00:16:46.645 "nvme_io_md": false, 00:16:46.645 "write_zeroes": true, 00:16:46.645 "zcopy": false, 00:16:46.645 "get_zone_info": false, 00:16:46.645 "zone_management": false, 00:16:46.645 "zone_append": false, 00:16:46.645 "compare": false, 00:16:46.645 "compare_and_write": false, 00:16:46.645 "abort": false, 00:16:46.645 "seek_hole": false, 00:16:46.645 "seek_data": false, 00:16:46.645 "copy": false, 00:16:46.645 "nvme_iov_md": false 00:16:46.645 }, 00:16:46.645 "driver_specific": { 00:16:46.645 "ftl": { 00:16:46.645 "base_bdev": "6c3709ad-543b-4403-8168-46825a7e39be", 00:16:46.645 "cache": "nvc0n1p0" 
00:16:46.645 } 00:16:46.645 } 00:16:46.645 } 00:16:46.645 ]' 00:16:46.645 12:58:38 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:46.904 12:58:38 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:46.904 12:58:38 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:46.904 [2024-08-11 12:58:38.476643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.904 [2024-08-11 12:58:38.476957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:46.904 [2024-08-11 12:58:38.477093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:46.904 [2024-08-11 12:58:38.477237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.904 [2024-08-11 12:58:38.477309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:46.904 [2024-08-11 12:58:38.477764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.904 [2024-08-11 12:58:38.477785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:46.904 [2024-08-11 12:58:38.477801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:16:46.904 [2024-08-11 12:58:38.477814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.904 [2024-08-11 12:58:38.478404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.904 [2024-08-11 12:58:38.478440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:46.904 [2024-08-11 12:58:38.478479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:16:46.904 [2024-08-11 12:58:38.478508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.904 [2024-08-11 12:58:38.482136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.904 [2024-08-11 12:58:38.482167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:46.904 [2024-08-11 12:58:38.482185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:16:46.904 [2024-08-11 12:58:38.482198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.904 [2024-08-11 12:58:38.489316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.904 [2024-08-11 12:58:38.489366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:46.904 [2024-08-11 12:58:38.489400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.040 ms 00:16:46.904 [2024-08-11 12:58:38.489412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.904 [2024-08-11 12:58:38.490780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.904 [2024-08-11 12:58:38.490837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:46.904 [2024-08-11 12:58:38.490874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:16:46.904 [2024-08-11 12:58:38.490905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.904 [2024-08-11 12:58:38.495285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.904 [2024-08-11 12:58:38.495354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:46.905 [2024-08-11 12:58:38.495389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:16:46.905 
[2024-08-11 12:58:38.495400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.905 [2024-08-11 12:58:38.495584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.905 [2024-08-11 12:58:38.495603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:46.905 [2024-08-11 12:58:38.495617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:16:46.905 [2024-08-11 12:58:38.495628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.905 [2024-08-11 12:58:38.497726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.905 [2024-08-11 12:58:38.497764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:46.905 [2024-08-11 12:58:38.497801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:16:46.905 [2024-08-11 12:58:38.497812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.905 [2024-08-11 12:58:38.499433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.905 [2024-08-11 12:58:38.499624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:46.905 [2024-08-11 12:58:38.499671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:16:46.905 [2024-08-11 12:58:38.499684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.165 [2024-08-11 12:58:38.501115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.165 [2024-08-11 12:58:38.501169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:47.165 [2024-08-11 12:58:38.501205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:16:47.165 [2024-08-11 12:58:38.501217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.165 [2024-08-11 12:58:38.502564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.165 [2024-08-11 12:58:38.502602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:47.165 [2024-08-11 12:58:38.502636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:16:47.165 [2024-08-11 12:58:38.502647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.165 [2024-08-11 12:58:38.502740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:47.165 [2024-08-11 12:58:38.502764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.502871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503674] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.503987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 
12:58:38.504100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:47.165 [2024-08-11 12:58:38.504112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:16:47.166 [2024-08-11 12:58:38.504441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:47.166 [2024-08-11 12:58:38.504722] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:47.166 [2024-08-11 12:58:38.504756] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72816ad-f32f-44ac-983e-8aec6b187201 00:16:47.166 [2024-08-11 12:58:38.504770] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:47.166 [2024-08-11 12:58:38.504784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:47.166 [2024-08-11 12:58:38.504796] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:47.166 [2024-08-11 12:58:38.504811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:47.166 [2024-08-11 12:58:38.504822] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:47.166 [2024-08-11 12:58:38.504836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:47.166 [2024-08-11 12:58:38.504853] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:47.166 [2024-08-11 12:58:38.504884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:47.166 [2024-08-11 12:58:38.504899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:47.166 [2024-08-11 12:58:38.504918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.166 [2024-08-11 12:58:38.504931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:47.166 [2024-08-11 12:58:38.504946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:16:47.166 [2024-08-11 12:58:38.504958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.506425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.166 [2024-08-11 12:58:38.506579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:47.166 [2024-08-11 12:58:38.506610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:16:47.166 [2024-08-11 12:58:38.506624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.506743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.166 [2024-08-11 12:58:38.506762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:47.166 [2024-08-11 12:58:38.506778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:47.166 [2024-08-11 12:58:38.506792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.512397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.512442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.166 [2024-08-11 12:58:38.512477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.512490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.512608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.512626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.166 [2024-08-11 12:58:38.512642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.512673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.512851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.512888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.166 [2024-08-11 12:58:38.512946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.512959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.513015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.513030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.166 [2024-08-11 12:58:38.513045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.513057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.521253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:16:47.166 [2024-08-11 12:58:38.521330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.166 [2024-08-11 12:58:38.521366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.521378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.527935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.528183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.166 [2024-08-11 12:58:38.528219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.528237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.528404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.528424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:47.166 [2024-08-11 12:58:38.528440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.528452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.528514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.528563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:47.166 [2024-08-11 12:58:38.528579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.528590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.528765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.528784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:47.166 [2024-08-11 12:58:38.528799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.166 [2024-08-11 12:58:38.528810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.166 [2024-08-11 12:58:38.528952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.166 [2024-08-11 12:58:38.528973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:47.166 [2024-08-11 12:58:38.528989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.167 [2024-08-11 12:58:38.529017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.167 [2024-08-11 12:58:38.529089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.167 [2024-08-11 12:58:38.529107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:47.167 [2024-08-11 12:58:38.529122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.167 [2024-08-11 12:58:38.529134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.167 [2024-08-11 12:58:38.529209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.167 [2024-08-11 12:58:38.529227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:47.167 [2024-08-11 12:58:38.529242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.167 [2024-08-11 12:58:38.529254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.167 [2024-08-11 
12:58:38.529491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.796 ms, result 0 00:16:47.167 true 00:16:47.167 12:58:38 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 84575 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 84575 ']' 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 84575 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84575 00:16:47.167 killing process with pid 84575 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84575' 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 84575 00:16:47.167 12:58:38 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 84575 00:16:50.453 12:58:41 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:51.390 65536+0 records in 00:16:51.390 65536+0 records out 00:16:51.390 268435456 bytes (268 MB, 256 MiB) copied, 1.07337 s, 250 MB/s 00:16:51.390 12:58:42 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:51.390 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:51.390 [2024-08-11 12:58:42.732618] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:16:51.390 [2024-08-11 12:58:42.732793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84747 ] 00:16:51.390 [2024-08-11 12:58:42.883783] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.390 [2024-08-11 12:58:42.928746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.650 [2024-08-11 12:58:43.022514] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:51.650 [2024-08-11 12:58:43.022885] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:51.650 [2024-08-11 12:58:43.182709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.650 [2024-08-11 12:58:43.182786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:51.650 [2024-08-11 12:58:43.182830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:51.650 [2024-08-11 12:58:43.182842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.650 [2024-08-11 12:58:43.185541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.650 [2024-08-11 12:58:43.185585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.650 [2024-08-11 12:58:43.185627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:16:51.650 [2024-08-11 12:58:43.185648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.650 [2024-08-11 12:58:43.185777] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:51.650 [2024-08-11 12:58:43.186109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:51.650 [2024-08-11 12:58:43.186155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.650 [2024-08-11 12:58:43.186168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.650 [2024-08-11 12:58:43.186180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:16:51.650 [2024-08-11 12:58:43.186191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.650 [2024-08-11 12:58:43.187443] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:51.650 [2024-08-11 12:58:43.189620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.650 [2024-08-11 12:58:43.189676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:51.650 [2024-08-11 12:58:43.189709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:16:51.650 [2024-08-11 12:58:43.189719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.650 [2024-08-11 12:58:43.189810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.650 [2024-08-11 12:58:43.189829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:51.650 [2024-08-11 12:58:43.189845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:51.650 [2024-08-11 12:58:43.189856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.194622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:51.651 [2024-08-11 12:58:43.194663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.651 [2024-08-11 12:58:43.194694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:16:51.651 [2024-08-11 12:58:43.194710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.194867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.651 [2024-08-11 12:58:43.194888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.651 [2024-08-11 12:58:43.194942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:16:51.651 [2024-08-11 12:58:43.194957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.194999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.651 [2024-08-11 12:58:43.195015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:51.651 [2024-08-11 12:58:43.195026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:51.651 [2024-08-11 12:58:43.195037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.195066] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:51.651 [2024-08-11 12:58:43.196509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.651 [2024-08-11 12:58:43.196720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.651 [2024-08-11 12:58:43.196747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:16:51.651 [2024-08-11 12:58:43.196759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.196835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.651 [2024-08-11 12:58:43.196863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:51.651 [2024-08-11 12:58:43.196876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:51.651 [2024-08-11 12:58:43.196920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.196965] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:51.651 [2024-08-11 12:58:43.197003] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:51.651 [2024-08-11 12:58:43.197052] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:51.651 [2024-08-11 12:58:43.197075] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:16:51.651 [2024-08-11 12:58:43.197182] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:51.651 [2024-08-11 12:58:43.197199] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:51.651 [2024-08-11 12:58:43.197223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:51.651 [2024-08-11 12:58:43.197238] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:51.651 [2024-08-11 12:58:43.197255] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:51.651 [2024-08-11 12:58:43.197267] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:51.651 [2024-08-11 12:58:43.197277] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:51.651 [2024-08-11 12:58:43.197304] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:51.651 [2024-08-11 12:58:43.197314] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:51.651 [2024-08-11 12:58:43.197325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.651 [2024-08-11 12:58:43.197335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:51.651 [2024-08-11 12:58:43.197361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:16:51.651 [2024-08-11 12:58:43.197374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.197488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.651 [2024-08-11 12:58:43.197526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:51.651 [2024-08-11 12:58:43.197555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:51.651 [2024-08-11 12:58:43.197565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.651 [2024-08-11 12:58:43.197717] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:51.651 [2024-08-11 12:58:43.197745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:51.651 [2024-08-11 12:58:43.197758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.651 [2024-08-11 12:58:43.197769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.651 [2024-08-11 12:58:43.197789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:51.651 [2024-08-11 12:58:43.197800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:51.651 [2024-08-11 12:58:43.197810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:51.651 [2024-08-11 12:58:43.197827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:51.651 [2024-08-11 12:58:43.197839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:51.651 [2024-08-11 12:58:43.197853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.651 [2024-08-11 12:58:43.197890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:51.651 [2024-08-11 12:58:43.197908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:51.651 [2024-08-11 12:58:43.197919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.651 [2024-08-11 12:58:43.197929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:51.651 [2024-08-11 12:58:43.197950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:51.651 [2024-08-11 12:58:43.197960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.651 [2024-08-11 12:58:43.197970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:51.651 [2024-08-11 12:58:43.197980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:51.651 [2024-08-11 12:58:43.197990] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.651 [2024-08-11 12:58:43.198000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:51.651 [2024-08-11 12:58:43.198010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:51.651 [2024-08-11 12:58:43.198020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.651 [2024-08-11 12:58:43.198030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:51.651 [2024-08-11 12:58:43.198048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:51.651 [2024-08-11 12:58:43.198063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.651 [2024-08-11 12:58:43.198074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:51.651 [2024-08-11 12:58:43.198098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:51.651 [2024-08-11 12:58:43.198109] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.651 [2024-08-11 12:58:43.198126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:51.651 [2024-08-11 12:58:43.198141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:51.651 [2024-08-11 12:58:43.198154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.651 [2024-08-11 12:58:43.198174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:51.651 [2024-08-11 12:58:43.198188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:51.651 [2024-08-11 12:58:43.198204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.651 [2024-08-11 12:58:43.198214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:51.651 [2024-08-11 12:58:43.198228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:51.652 [2024-08-11 12:58:43.198238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.652 [2024-08-11 12:58:43.198248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:51.652 [2024-08-11 12:58:43.198259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:51.652 [2024-08-11 12:58:43.198268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.652 [2024-08-11 12:58:43.198278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:51.652 [2024-08-11 12:58:43.198288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:51.652 [2024-08-11 12:58:43.198301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.652 [2024-08-11 12:58:43.198312] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:51.652 [2024-08-11 12:58:43.198323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:51.652 [2024-08-11 12:58:43.198334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.652 [2024-08-11 12:58:43.198347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.652 [2024-08-11 12:58:43.198367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:51.652 [2024-08-11 12:58:43.198378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:51.652 [2024-08-11 12:58:43.198388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:51.652 
[2024-08-11 12:58:43.198414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:51.652 [2024-08-11 12:58:43.198438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:51.652 [2024-08-11 12:58:43.198448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:51.652 [2024-08-11 12:58:43.198459] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:51.652 [2024-08-11 12:58:43.198472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.652 [2024-08-11 12:58:43.198486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:51.652 [2024-08-11 12:58:43.198496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:51.652 [2024-08-11 12:58:43.198507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:51.652 [2024-08-11 12:58:43.198521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:51.652 [2024-08-11 12:58:43.198547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:51.652 [2024-08-11 12:58:43.198557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:51.652 [2024-08-11 12:58:43.198567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:51.652 [2024-08-11 12:58:43.198577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:51.652 [2024-08-11 12:58:43.198587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:51.652 [2024-08-11 12:58:43.198607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:51.652 [2024-08-11 12:58:43.198617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:51.652 [2024-08-11 12:58:43.198627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:51.652 [2024-08-11 12:58:43.198638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:51.652 [2024-08-11 12:58:43.198648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:51.652 [2024-08-11 12:58:43.198675] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:51.652 [2024-08-11 12:58:43.198686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.652 [2024-08-11 12:58:43.198698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:51.652 [2024-08-11 12:58:43.198709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:51.652 [2024-08-11 12:58:43.198720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:51.652 [2024-08-11 12:58:43.198733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:51.652 [2024-08-11 12:58:43.198745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.198755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:51.652 [2024-08-11 12:58:43.198766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:16:51.652 [2024-08-11 12:58:43.198776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.219378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.219718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.652 [2024-08-11 12:58:43.219938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.483 ms 00:16:51.652 [2024-08-11 12:58:43.220014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.220539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.220734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:51.652 [2024-08-11 12:58:43.220936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:51.652 [2024-08-11 12:58:43.221095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.231200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.231426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.652 [2024-08-11 12:58:43.231534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.867 ms 00:16:51.652 [2024-08-11 12:58:43.231582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.231720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.231846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.652 [2024-08-11 12:58:43.231957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.652 [2024-08-11 12:58:43.232000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.232366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.232440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.652 [2024-08-11 12:58:43.232480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:16:51.652 [2024-08-11 12:58:43.232581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.232767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.232920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.652 [2024-08-11 12:58:43.233069] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:16:51.652 [2024-08-11 12:58:43.233116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.238580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.652 [2024-08-11 12:58:43.238781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.652 [2024-08-11 12:58:43.238920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.389 ms 00:16:51.652 [2024-08-11 12:58:43.238972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.652 [2024-08-11 12:58:43.241471] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:51.653 [2024-08-11 12:58:43.241678] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:51.653 [2024-08-11 12:58:43.241836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.653 [2024-08-11 12:58:43.241968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:51.653 [2024-08-11 12:58:43.242021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:16:51.653 [2024-08-11 12:58:43.242115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.259314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.911 [2024-08-11 12:58:43.259505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:51.911 [2024-08-11 12:58:43.259614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.102 ms 00:16:51.911 [2024-08-11 12:58:43.259672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.261908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.911 [2024-08-11 12:58:43.262101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:51.911 [2024-08-11 12:58:43.262212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:16:51.911 [2024-08-11 12:58:43.262261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.264053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.911 [2024-08-11 12:58:43.264202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:51.911 [2024-08-11 12:58:43.264367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:16:51.911 [2024-08-11 12:58:43.264431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.264920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.911 [2024-08-11 12:58:43.265091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:51.911 [2024-08-11 12:58:43.265214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:16:51.911 [2024-08-11 12:58:43.265263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.282531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.911 [2024-08-11 12:58:43.282839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:51.911 [2024-08-11 12:58:43.283030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.027 ms 00:16:51.911 [2024-08-11 12:58:43.283158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.291842] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:51.911 [2024-08-11 12:58:43.306466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.911 [2024-08-11 12:58:43.306540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:51.911 [2024-08-11 12:58:43.306584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.139 ms 00:16:51.911 [2024-08-11 12:58:43.306600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.306759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.911 [2024-08-11 12:58:43.306810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:51.911 [2024-08-11 12:58:43.306837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:51.911 [2024-08-11 12:58:43.306847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.911 [2024-08-11 12:58:43.306923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.912 [2024-08-11 12:58:43.306963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:51.912 [2024-08-11 12:58:43.307004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:51.912 [2024-08-11 12:58:43.307014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.912 [2024-08-11 12:58:43.307093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.912 [2024-08-11 12:58:43.307111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:51.912 [2024-08-11 12:58:43.307123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:51.912 [2024-08-11 12:58:43.307134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.912 [2024-08-11 12:58:43.307197] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:51.912 [2024-08-11 12:58:43.307214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.912 [2024-08-11 12:58:43.307225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:51.912 [2024-08-11 12:58:43.307237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:51.912 [2024-08-11 12:58:43.307247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.912 [2024-08-11 12:58:43.311101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.912 [2024-08-11 12:58:43.311153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:51.912 [2024-08-11 12:58:43.311170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.828 ms 00:16:51.912 [2024-08-11 12:58:43.311181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.912 [2024-08-11 12:58:43.311313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.912 [2024-08-11 12:58:43.311348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:51.912 [2024-08-11 12:58:43.311360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:51.912 [2024-08-11 12:58:43.311370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.912 
[2024-08-11 12:58:43.312447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.912 [2024-08-11 12:58:43.313692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.401 ms, result 0 00:16:51.912 [2024-08-11 12:58:43.314389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.912 [2024-08-11 12:58:43.323786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.996  Copying: 22/256 [MB] (22 MBps) Copying: 44/256 [MB] (21 MBps) Copying: 67/256 [MB] (22 MBps) Copying: 91/256 [MB] (24 MBps) Copying: 115/256 [MB] (23 MBps) Copying: 138/256 [MB] (23 MBps) Copying: 160/256 [MB] (22 MBps) Copying: 185/256 [MB] (24 MBps) Copying: 208/256 [MB] (22 MBps) Copying: 231/256 [MB] (22 MBps) Copying: 253/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-08-11 12:58:54.428486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.996 [2024-08-11 12:58:54.429721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.429889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:02.996 [2024-08-11 12:58:54.430015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.996 [2024-08-11 12:58:54.430039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.430078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:02.996 [2024-08-11 12:58:54.430483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.430512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:02.996 [2024-08-11 12:58:54.430527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:17:02.996 [2024-08-11 12:58:54.430545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.432064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.432122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:02.996 [2024-08-11 12:58:54.432150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:17:02.996 [2024-08-11 12:58:54.432170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.438974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.439028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:02.996 [2024-08-11 12:58:54.439044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.765 ms 00:17:02.996 [2024-08-11 12:58:54.439066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.447132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.447169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:02.996 [2024-08-11 12:58:54.447200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.997 ms 00:17:02.996 [2024-08-11 12:58:54.447227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:02.996 [2024-08-11 12:58:54.448515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.448571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:02.996 [2024-08-11 12:58:54.448604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:17:02.996 [2024-08-11 12:58:54.448614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.451677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.451720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:02.996 [2024-08-11 12:58:54.451751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:17:02.996 [2024-08-11 12:58:54.451771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.451962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.451996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:02.996 [2024-08-11 12:58:54.452018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:02.996 [2024-08-11 12:58:54.452029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.454116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.454170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:02.996 [2024-08-11 12:58:54.454202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:17:02.996 [2024-08-11 12:58:54.454212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.455709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.455747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:02.996 [2024-08-11 12:58:54.455779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:17:02.996 [2024-08-11 12:58:54.455789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.457026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.457192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:02.996 [2024-08-11 12:58:54.457218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:17:02.996 [2024-08-11 12:58:54.457231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.458372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.996 [2024-08-11 12:58:54.458414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:02.996 [2024-08-11 12:58:54.458429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:17:02.996 [2024-08-11 12:58:54.458439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.996 [2024-08-11 12:58:54.458480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:02.996 [2024-08-11 12:58:54.458503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:02.996 [2024-08-11 12:58:54.458642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.458997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459223] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459546] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:02.997 [2024-08-11 12:58:54.459833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:02.997 [2024-08-11 12:58:54.459844] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72816ad-f32f-44ac-983e-8aec6b187201 00:17:02.997 [2024-08-11 12:58:54.459889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:02.997 [2024-08-11 12:58:54.459923] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:02.997 [2024-08-11 12:58:54.459942] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:02.997 [2024-08-11 12:58:54.459953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:02.997 [2024-08-11 12:58:54.459970] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:02.998 [2024-08-11 12:58:54.459981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:02.998 [2024-08-11 12:58:54.459992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:02.998 [2024-08-11 12:58:54.460002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:02.998 [2024-08-11 12:58:54.460012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:02.998 [2024-08-11 12:58:54.460023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.998 [2024-08-11 12:58:54.460034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:02.998 [2024-08-11 12:58:54.460046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:17:02.998 [2024-08-11 12:58:54.460057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.461453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.998 [2024-08-11 12:58:54.461512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:02.998 [2024-08-11 12:58:54.461525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:17:02.998 [2024-08-11 12:58:54.461535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.461613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.998 [2024-08-11 12:58:54.461627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:02.998 [2024-08-11 12:58:54.461639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:02.998 [2024-08-11 12:58:54.461660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.466729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.466769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.998 [2024-08-11 12:58:54.466782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.466808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.466905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.466919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.998 [2024-08-11 12:58:54.466930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.466940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.466994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.467054] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.998 [2024-08-11 12:58:54.467067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.467077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.467101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.467113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.998 [2024-08-11 12:58:54.467143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.467153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.476454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.476762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.998 [2024-08-11 12:58:54.476791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.476804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.484360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.484411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.998 [2024-08-11 12:58:54.484444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.484454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.484535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.484556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.998 [2024-08-11 12:58:54.484577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.484587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.484619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.484633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.998 [2024-08-11 12:58:54.484659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.484669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.484770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.484804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.998 [2024-08-11 12:58:54.484822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.484833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.484897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.484954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:02.998 [2024-08-11 12:58:54.484969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.484979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.485032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:02.998 [2024-08-11 12:58:54.485058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.998 [2024-08-11 12:58:54.485070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.485084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.485135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.998 [2024-08-11 12:58:54.485161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.998 [2024-08-11 12:58:54.485188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.998 [2024-08-11 12:58:54.485213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.998 [2024-08-11 12:58:54.485395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.634 ms, result 0 00:17:03.317 00:17:03.317 00:17:03.317 12:58:54 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=84878 00:17:03.317 12:58:54 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:03.317 12:58:54 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 84878 00:17:03.317 12:58:54 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 84878 ']' 00:17:03.317 12:58:54 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.317 12:58:54 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:03.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.317 12:58:54 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.317 12:58:54 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:03.317 12:58:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:03.583 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:03.583 [2024-08-11 12:58:54.960786] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:17:03.583 [2024-08-11 12:58:54.961123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84878 ] 00:17:03.583 [2024-08-11 12:58:55.108424] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.583 [2024-08-11 12:58:55.147042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.842 12:58:55 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:03.842 12:58:55 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:03.842 12:58:55 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:04.101 [2024-08-11 12:58:55.589072] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.101 [2024-08-11 12:58:55.589135] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.361 [2024-08-11 12:58:55.763044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.763094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:04.361 [2024-08-11 12:58:55.763132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:04.361 [2024-08-11 12:58:55.763144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.765526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.765564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.361 [2024-08-11 12:58:55.765600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:17:04.361 [2024-08-11 12:58:55.765611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.765736] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:04.361 [2024-08-11 12:58:55.766091] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:04.361 [2024-08-11 12:58:55.766128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.766142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.361 [2024-08-11 12:58:55.766157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:17:04.361 [2024-08-11 12:58:55.766170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.767564] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:04.361 [2024-08-11 12:58:55.769754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.769803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:04.361 [2024-08-11 12:58:55.769835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:17:04.361 [2024-08-11 12:58:55.769848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.769974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.769999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:04.361 [2024-08-11 12:58:55.770012] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:04.361 [2024-08-11 12:58:55.770030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.774195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.774257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.361 [2024-08-11 12:58:55.774273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:17:04.361 [2024-08-11 12:58:55.774304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.774454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.774477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.361 [2024-08-11 12:58:55.774489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:04.361 [2024-08-11 12:58:55.774511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.774547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.774573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:04.361 [2024-08-11 12:58:55.774585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:04.361 [2024-08-11 12:58:55.774597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.361 [2024-08-11 12:58:55.774632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:04.361 [2024-08-11 12:58:55.775953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.361 [2024-08-11 12:58:55.776182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.361 [2024-08-11 12:58:55.776245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:17:04.361 [2024-08-11 12:58:55.776259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.362 [2024-08-11 12:58:55.776309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.362 [2024-08-11 12:58:55.776324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:04.362 [2024-08-11 12:58:55.776338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:04.362 [2024-08-11 12:58:55.776349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.362 [2024-08-11 12:58:55.776382] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:04.362 [2024-08-11 12:58:55.776408] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:04.362 [2024-08-11 12:58:55.776457] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:04.362 [2024-08-11 12:58:55.776497] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:04.362 [2024-08-11 12:58:55.776604] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:04.362 [2024-08-11 12:58:55.776619] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:04.362 [2024-08-11 12:58:55.776651] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:04.362 [2024-08-11 12:58:55.776664] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:04.362 [2024-08-11 12:58:55.776689] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:04.362 [2024-08-11 12:58:55.776701] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:04.362 [2024-08-11 12:58:55.776715] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:04.362 [2024-08-11 12:58:55.776725] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:04.362 [2024-08-11 12:58:55.776737] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:04.362 [2024-08-11 12:58:55.776749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.362 [2024-08-11 12:58:55.776761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:04.362 [2024-08-11 12:58:55.776772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:17:04.362 [2024-08-11 12:58:55.776784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.362 [2024-08-11 12:58:55.776862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.362 [2024-08-11 12:58:55.776882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:04.362 [2024-08-11 12:58:55.776893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:04.362 [2024-08-11 12:58:55.776916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.362 [2024-08-11 12:58:55.777045] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:04.362 [2024-08-11 12:58:55.777082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:04.362 [2024-08-11 12:58:55.777095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:04.362 [2024-08-11 12:58:55.777133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:04.362 [2024-08-11 12:58:55.777167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.362 [2024-08-11 12:58:55.777189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:04.362 [2024-08-11 12:58:55.777201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:04.362 [2024-08-11 12:58:55.777212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.362 [2024-08-11 12:58:55.777224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:04.362 [2024-08-11 12:58:55.777234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:04.362 [2024-08-11 12:58:55.777245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.362 
[2024-08-11 12:58:55.777256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:04.362 [2024-08-11 12:58:55.777268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:04.362 [2024-08-11 12:58:55.777300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:04.362 [2024-08-11 12:58:55.777353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:04.362 [2024-08-11 12:58:55.777401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:04.362 [2024-08-11 12:58:55.777450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:04.362 [2024-08-11 12:58:55.777484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.362 [2024-08-11 12:58:55.777505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:04.362 [2024-08-11 12:58:55.777519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:04.362 [2024-08-11 12:58:55.777530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.362 [2024-08-11 12:58:55.777544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:04.362 [2024-08-11 12:58:55.777555] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:04.362 [2024-08-11 12:58:55.777567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:04.362 [2024-08-11 12:58:55.777590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:04.362 [2024-08-11 12:58:55.777600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777612] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:04.362 [2024-08-11 12:58:55.777623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:04.362 [2024-08-11 12:58:55.777636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.362 [2024-08-11 12:58:55.777662] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:04.362 [2024-08-11 12:58:55.777673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:04.362 [2024-08-11 12:58:55.777686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:04.362 [2024-08-11 12:58:55.777697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:04.362 [2024-08-11 12:58:55.777709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:04.362 [2024-08-11 12:58:55.777721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:04.362 [2024-08-11 12:58:55.777737] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:04.362 [2024-08-11 12:58:55.777750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.362 [2024-08-11 12:58:55.777764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:04.362 [2024-08-11 12:58:55.777776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:04.362 [2024-08-11 12:58:55.777789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:04.362 [2024-08-11 12:58:55.777801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:04.362 [2024-08-11 12:58:55.777814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:04.362 [2024-08-11 12:58:55.777825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:04.362 [2024-08-11 12:58:55.777838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:04.362 [2024-08-11 12:58:55.777849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:04.362 [2024-08-11 12:58:55.777863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:04.362 [2024-08-11 12:58:55.777874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:04.362 [2024-08-11 12:58:55.777918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:04.362 [2024-08-11 12:58:55.777930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:04.362 [2024-08-11 12:58:55.777944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:04.362 [2024-08-11 12:58:55.777957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:04.362 [2024-08-11 12:58:55.777984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:04.362 [2024-08-11 
12:58:55.777999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.362 [2024-08-11 12:58:55.778025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:04.362 [2024-08-11 12:58:55.778051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:04.362 [2024-08-11 12:58:55.778098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:04.362 [2024-08-11 12:58:55.778110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:04.362 [2024-08-11 12:58:55.778125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.778138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:04.363 [2024-08-11 12:58:55.778151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:17:04.363 [2024-08-11 12:58:55.778163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.786687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.786750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.363 [2024-08-11 12:58:55.786788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.438 ms 00:17:04.363 [2024-08-11 12:58:55.786799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.787046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.787067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.363 [2024-08-11 12:58:55.787085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:04.363 [2024-08-11 12:58:55.787106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.794841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.795102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.363 [2024-08-11 12:58:55.795135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.705 ms 00:17:04.363 [2024-08-11 12:58:55.795158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.795232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.795250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.363 [2024-08-11 12:58:55.795266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:04.363 [2024-08-11 12:58:55.795277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.795640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.795659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.363 [2024-08-11 12:58:55.795673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:17:04.363 [2024-08-11 12:58:55.795684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:04.363 [2024-08-11 12:58:55.795825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.795842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.363 [2024-08-11 12:58:55.795908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:04.363 [2024-08-11 12:58:55.795938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.801678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.801718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.363 [2024-08-11 12:58:55.801752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.708 ms 00:17:04.363 [2024-08-11 12:58:55.801764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.804141] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:04.363 [2024-08-11 12:58:55.804195] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:04.363 [2024-08-11 12:58:55.804232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.804244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:04.363 [2024-08-11 12:58:55.804257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:17:04.363 [2024-08-11 12:58:55.804282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.818089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.818165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:04.363 [2024-08-11 12:58:55.818203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.739 ms 00:17:04.363 [2024-08-11 12:58:55.818216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.820261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.820451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:04.363 [2024-08-11 12:58:55.820481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:17:04.363 [2024-08-11 12:58:55.820493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.822223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.822260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:04.363 [2024-08-11 12:58:55.822293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:17:04.363 [2024-08-11 12:58:55.822318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.822675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.822701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:04.363 [2024-08-11 12:58:55.822717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:04.363 [2024-08-11 12:58:55.822735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.849113] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.849189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:04.363 [2024-08-11 12:58:55.849228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.342 ms 00:17:04.363 [2024-08-11 12:58:55.849240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.857123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:04.363 [2024-08-11 12:58:55.869213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.869300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:04.363 [2024-08-11 12:58:55.869322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.837 ms 00:17:04.363 [2024-08-11 12:58:55.869335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.869488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.869510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:04.363 [2024-08-11 12:58:55.869522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:04.363 [2024-08-11 12:58:55.869534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.869593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.869614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:04.363 [2024-08-11 12:58:55.869626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:04.363 [2024-08-11 12:58:55.869642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.869671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.869687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:04.363 [2024-08-11 12:58:55.869698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:04.363 [2024-08-11 12:58:55.869713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.869752] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:04.363 [2024-08-11 12:58:55.869769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.869780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:04.363 [2024-08-11 12:58:55.869792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:04.363 [2024-08-11 12:58:55.869812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.873755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.873977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:04.363 [2024-08-11 12:58:55.874155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.906 ms 00:17:04.363 [2024-08-11 12:58:55.874296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.874501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.363 [2024-08-11 12:58:55.874536] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:04.363 [2024-08-11 12:58:55.874570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:04.363 [2024-08-11 12:58:55.874607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.363 [2024-08-11 12:58:55.875734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:04.363 [2024-08-11 12:58:55.877049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.347 ms, result 0 00:17:04.363 [2024-08-11 12:58:55.878115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:04.363 Some configs were skipped because the RPC state that can call them passed over. 00:17:04.363 12:58:55 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:04.622 [2024-08-11 12:58:56.132788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.622 [2024-08-11 12:58:56.133165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:04.622 [2024-08-11 12:58:56.133320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:17:04.622 [2024-08-11 12:58:56.133395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.622 [2024-08-11 12:58:56.133549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.190 ms, result 0 00:17:04.622 true 00:17:04.622 12:58:56 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:04.882 [2024-08-11 12:58:56.361054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.882 [2024-08-11 12:58:56.361330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:04.882 [2024-08-11 12:58:56.361457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:17:04.882 [2024-08-11 12:58:56.361505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.882 [2024-08-11 12:58:56.361588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.951 ms, result 0 00:17:04.882 true 00:17:04.882 12:58:56 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 84878 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 84878 ']' 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 84878 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84878 00:17:04.882 killing process with pid 84878 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84878' 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 84878 00:17:04.882 12:58:56 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 84878 00:17:05.143 [2024-08-11 12:58:56.520387] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.520474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:05.143 [2024-08-11 12:58:56.520497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:05.143 [2024-08-11 12:58:56.520525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.520555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:05.143 [2024-08-11 12:58:56.520992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.521009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:05.143 [2024-08-11 12:58:56.521022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:17:05.143 [2024-08-11 12:58:56.521032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.521295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.521311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:05.143 [2024-08-11 12:58:56.521325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:17:05.143 [2024-08-11 12:58:56.521336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.524925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.524963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:05.143 [2024-08-11 12:58:56.524999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.547 ms 00:17:05.143 [2024-08-11 12:58:56.525011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.532323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.532360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:05.143 [2024-08-11 12:58:56.532393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.264 ms 00:17:05.143 [2024-08-11 12:58:56.532404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.533806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.534047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:05.143 [2024-08-11 12:58:56.534081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:17:05.143 [2024-08-11 12:58:56.534095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.537551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.537754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:05.143 [2024-08-11 12:58:56.537784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:17:05.143 [2024-08-11 12:58:56.537798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.537970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.537990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:05.143 [2024-08-11 12:58:56.538005] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:05.143 [2024-08-11 12:58:56.538016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.539734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.539770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:05.143 [2024-08-11 12:58:56.539815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.684 ms 00:17:05.143 [2024-08-11 12:58:56.539825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.541398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.541431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:05.143 [2024-08-11 12:58:56.541464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:17:05.143 [2024-08-11 12:58:56.541474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.542706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.542741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:05.143 [2024-08-11 12:58:56.542782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:17:05.143 [2024-08-11 12:58:56.542793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.544090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.143 [2024-08-11 12:58:56.544301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:05.143 [2024-08-11 12:58:56.544346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:17:05.143 [2024-08-11 12:58:56.544358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.143 [2024-08-11 12:58:56.544408] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:05.143 [2024-08-11 12:58:56.544430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544558] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 12:58:56.544889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:05.143 [2024-08-11 
12:58:56.544940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.544953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.544966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.544977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.544990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:05.144 [2024-08-11 12:58:56.545257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:05.144 [2024-08-11 12:58:56.545789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:05.144 [2024-08-11 12:58:56.545810] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72816ad-f32f-44ac-983e-8aec6b187201 00:17:05.144 [2024-08-11 12:58:56.545822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:05.144 [2024-08-11 12:58:56.545835] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:05.144 [2024-08-11 12:58:56.545845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:05.144 [2024-08-11 12:58:56.545858] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:05.144 [2024-08-11 12:58:56.545869] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:05.144 [2024-08-11 12:58:56.545896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:05.144 [2024-08-11 12:58:56.545916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:05.144 [2024-08-11 12:58:56.545941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:05.144 [2024-08-11 12:58:56.545954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:05.144 [2024-08-11 12:58:56.545968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.144 
[2024-08-11 12:58:56.545980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:05.144 [2024-08-11 12:58:56.545998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:17:05.144 [2024-08-11 12:58:56.546009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.144 [2024-08-11 12:58:56.547401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.144 [2024-08-11 12:58:56.547554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:05.144 [2024-08-11 12:58:56.547583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:17:05.144 [2024-08-11 12:58:56.547595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.144 [2024-08-11 12:58:56.547691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.144 [2024-08-11 12:58:56.547709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:05.144 [2024-08-11 12:58:56.547723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:05.144 [2024-08-11 12:58:56.547736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.144 [2024-08-11 12:58:56.553330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.144 [2024-08-11 12:58:56.553369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.144 [2024-08-11 12:58:56.553403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.144 [2024-08-11 12:58:56.553413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.144 [2024-08-11 12:58:56.553512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.144 [2024-08-11 12:58:56.553530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.144 [2024-08-11 12:58:56.553543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.144 [2024-08-11 12:58:56.553554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.144 [2024-08-11 12:58:56.553615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.144 [2024-08-11 12:58:56.553633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.144 [2024-08-11 12:58:56.553646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.144 [2024-08-11 12:58:56.553656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.553682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.553695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.145 [2024-08-11 12:58:56.553710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.553721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.561607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.561843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.145 [2024-08-11 12:58:56.561964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.561980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.568653] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.568833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.145 [2024-08-11 12:58:56.569007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.569055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.569216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.569306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.145 [2024-08-11 12:58:56.569417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.569485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.569758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.569820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.145 [2024-08-11 12:58:56.570044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.570103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.570252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.570357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.145 [2024-08-11 12:58:56.570469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.570537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.570747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.570806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:05.145 [2024-08-11 12:58:56.570948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.571009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.571094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.571170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.145 [2024-08-11 12:58:56.571216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.571327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.571428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.145 [2024-08-11 12:58:56.571483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.145 [2024-08-11 12:58:56.571589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.145 [2024-08-11 12:58:56.571641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.145 [2024-08-11 12:58:56.571982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.543 ms, result 0 00:17:05.404 12:58:56 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:05.404 12:58:56 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:05.404 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:05.404 [2024-08-11 12:58:56.848231] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:17:05.404 [2024-08-11 12:58:56.848530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84911 ] 00:17:05.404 [2024-08-11 12:58:56.987008] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.663 [2024-08-11 12:58:57.020057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.663 [2024-08-11 12:58:57.099674] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.663 [2024-08-11 12:58:57.099760] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.663 [2024-08-11 12:58:57.253744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.663 [2024-08-11 12:58:57.253793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.663 [2024-08-11 12:58:57.253827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:05.663 [2024-08-11 12:58:57.253836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.663 [2024-08-11 12:58:57.256354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.663 [2024-08-11 12:58:57.256395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.663 [2024-08-11 12:58:57.256434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms 00:17:05.663 [2024-08-11 12:58:57.256458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.663 [2024-08-11 12:58:57.256624] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.663 [2024-08-11 12:58:57.256966] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.663 [2024-08-11 12:58:57.256992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.663 [2024-08-11 12:58:57.257003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.663 [2024-08-11 12:58:57.257015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:17:05.663 [2024-08-11 12:58:57.257024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.663 [2024-08-11 12:58:57.258312] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:05.923 [2024-08-11 12:58:57.260658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.923 [2024-08-11 12:58:57.260697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:05.923 [2024-08-11 12:58:57.260740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.352 ms 00:17:05.923 [2024-08-11 12:58:57.260750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.923 [2024-08-11 12:58:57.260843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.923 [2024-08-11 12:58:57.260861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 
00:17:05.923 [2024-08-11 12:58:57.260922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:05.923 [2024-08-11 12:58:57.260934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.923 [2024-08-11 12:58:57.265460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.923 [2024-08-11 12:58:57.265657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.923 [2024-08-11 12:58:57.265693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.472 ms 00:17:05.923 [2024-08-11 12:58:57.265716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.923 [2024-08-11 12:58:57.265865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.924 [2024-08-11 12:58:57.265928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.924 [2024-08-11 12:58:57.265947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:05.924 [2024-08-11 12:58:57.265959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.924 [2024-08-11 12:58:57.266011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.924 [2024-08-11 12:58:57.266025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.924 [2024-08-11 12:58:57.266036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:05.924 [2024-08-11 12:58:57.266046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.924 [2024-08-11 12:58:57.266072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:05.924 [2024-08-11 12:58:57.267361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.924 [2024-08-11 12:58:57.267402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.924 [2024-08-11 12:58:57.267440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:17:05.924 [2024-08-11 12:58:57.267450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.924 [2024-08-11 12:58:57.267495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.924 [2024-08-11 12:58:57.267510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.924 [2024-08-11 12:58:57.267535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:05.924 [2024-08-11 12:58:57.267544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.924 [2024-08-11 12:58:57.267578] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:05.924 [2024-08-11 12:58:57.267603] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:05.924 [2024-08-11 12:58:57.267645] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:05.924 [2024-08-11 12:58:57.267666] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:05.924 [2024-08-11 12:58:57.267754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:05.924 [2024-08-11 12:58:57.267768] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.924 
[2024-08-11 12:58:57.267780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:05.924 [2024-08-11 12:58:57.267800] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.924 [2024-08-11 12:58:57.267815] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.924 [2024-08-11 12:58:57.267826] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:05.924 [2024-08-11 12:58:57.267835] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.924 [2024-08-11 12:58:57.267844] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:05.924 [2024-08-11 12:58:57.267919] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:05.924 [2024-08-11 12:58:57.267938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.924 [2024-08-11 12:58:57.267949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.924 [2024-08-11 12:58:57.267960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:17:05.924 [2024-08-11 12:58:57.267974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.924 [2024-08-11 12:58:57.268064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.924 [2024-08-11 12:58:57.268101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.924 [2024-08-11 12:58:57.268112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:05.924 [2024-08-11 12:58:57.268122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.924 [2024-08-11 12:58:57.268258] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.924 [2024-08-11 12:58:57.268281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.924 [2024-08-11 12:58:57.268291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.924 [2024-08-11 12:58:57.268319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.924 [2024-08-11 12:58:57.268346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.924 [2024-08-11 12:58:57.268365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.924 [2024-08-11 12:58:57.268375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:05.924 [2024-08-11 12:58:57.268383] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.924 [2024-08-11 12:58:57.268392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.924 [2024-08-11 12:58:57.268400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:05.924 [2024-08-11 12:58:57.268409] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.924 [2024-08-11 12:58:57.268427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.924 [2024-08-11 12:58:57.268453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.924 [2024-08-11 12:58:57.268478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.924 [2024-08-11 12:58:57.268508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.924 [2024-08-11 12:58:57.268534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.924 [2024-08-11 12:58:57.268559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.924 [2024-08-11 12:58:57.268576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.924 [2024-08-11 12:58:57.268584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:05.924 [2024-08-11 12:58:57.268592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.924 [2024-08-11 12:58:57.268601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:05.924 [2024-08-11 12:58:57.268609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:05.924 [2024-08-11 12:58:57.268617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:05.924 [2024-08-11 12:58:57.268634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:05.924 [2024-08-11 12:58:57.268645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.924 [2024-08-11 12:58:57.268654] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.924 [2024-08-11 12:58:57.268664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.924 [2024-08-11 12:58:57.268673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.924 [2024-08-11 
12:58:57.268702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.924 [2024-08-11 12:58:57.268713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.924 [2024-08-11 12:58:57.268721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.924 [2024-08-11 12:58:57.268730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.924 [2024-08-11 12:58:57.268738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.924 [2024-08-11 12:58:57.268747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.924 [2024-08-11 12:58:57.268757] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.924 [2024-08-11 12:58:57.268777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.924 [2024-08-11 12:58:57.268787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:05.924 [2024-08-11 12:58:57.268796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:05.924 [2024-08-11 12:58:57.268805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:05.924 [2024-08-11 12:58:57.268817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:05.924 [2024-08-11 12:58:57.268828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:05.924 [2024-08-11 12:58:57.268837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:05.924 [2024-08-11 12:58:57.268846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:05.924 [2024-08-11 12:58:57.268855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:05.924 [2024-08-11 12:58:57.268865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:05.924 [2024-08-11 12:58:57.268899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:05.924 [2024-08-11 12:58:57.268909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:05.925 [2024-08-11 12:58:57.268918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:05.925 [2024-08-11 12:58:57.268928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:05.925 [2024-08-11 12:58:57.269230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:05.925 [2024-08-11 12:58:57.269317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:17:05.925 [2024-08-11 12:58:57.269368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.925 [2024-08-11 12:58:57.269474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.925 [2024-08-11 12:58:57.269533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.925 [2024-08-11 12:58:57.269580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.925 [2024-08-11 12:58:57.269694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.925 [2024-08-11 12:58:57.269831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.269944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.925 [2024-08-11 12:58:57.269990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:17:05.925 [2024-08-11 12:58:57.270145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.291903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.292205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.925 [2024-08-11 12:58:57.292350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.614 ms 00:17:05.925 [2024-08-11 12:58:57.292502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.292770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.292850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.925 [2024-08-11 12:58:57.293059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:05.925 [2024-08-11 12:58:57.293122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.301617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.301655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.925 [2024-08-11 12:58:57.301686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.419 ms 00:17:05.925 [2024-08-11 12:58:57.301696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.301753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.301769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.925 [2024-08-11 12:58:57.301780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:05.925 [2024-08-11 12:58:57.301790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.302159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.302182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.925 [2024-08-11 12:58:57.302194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:17:05.925 [2024-08-11 12:58:57.302204] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.302382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.302411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.925 [2024-08-11 12:58:57.302423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:05.925 [2024-08-11 12:58:57.302433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.307073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.307111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.925 [2024-08-11 12:58:57.307154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:17:05.925 [2024-08-11 12:58:57.307164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.309480] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:05.925 [2024-08-11 12:58:57.309536] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:05.925 [2024-08-11 12:58:57.309553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.309564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:05.925 [2024-08-11 12:58:57.309574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:17:05.925 [2024-08-11 12:58:57.309583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.322511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.322575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:05.925 [2024-08-11 12:58:57.322593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.879 ms 00:17:05.925 [2024-08-11 12:58:57.322603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.324560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.324732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:05.925 [2024-08-11 12:58:57.324756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:17:05.925 [2024-08-11 12:58:57.324766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.326378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.326407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:05.925 [2024-08-11 12:58:57.326436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.562 ms 00:17:05.925 [2024-08-11 12:58:57.326445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.326781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.326800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:05.925 [2024-08-11 12:58:57.326811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:05.925 [2024-08-11 12:58:57.326821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 
[2024-08-11 12:58:57.341771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.341848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:05.925 [2024-08-11 12:58:57.341884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.909 ms 00:17:05.925 [2024-08-11 12:58:57.341918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.349047] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:05.925 [2024-08-11 12:58:57.361122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.361190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.925 [2024-08-11 12:58:57.361230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.086 ms 00:17:05.925 [2024-08-11 12:58:57.361240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.361409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.361427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:05.925 [2024-08-11 12:58:57.361439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.925 [2024-08-11 12:58:57.361460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.361520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.361534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.925 [2024-08-11 12:58:57.361554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:05.925 [2024-08-11 12:58:57.361568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.361614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.361629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.925 [2024-08-11 12:58:57.361639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:05.925 [2024-08-11 12:58:57.361648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.361684] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:05.925 [2024-08-11 12:58:57.361697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.361707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:05.925 [2024-08-11 12:58:57.361716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:05.925 [2024-08-11 12:58:57.361734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.365234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.365270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.925 [2024-08-11 12:58:57.365300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:17:05.925 [2024-08-11 12:58:57.365323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.365413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.925 [2024-08-11 12:58:57.365441] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:05.925 [2024-08-11 12:58:57.365452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:05.925 [2024-08-11 12:58:57.365462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.925 [2024-08-11 12:58:57.366536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.925 [2024-08-11 12:58:57.367708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.399 ms, result 0 00:17:05.925 [2024-08-11 12:58:57.368476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:05.925 [2024-08-11 12:58:57.377822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.041  Copying: 25/256 [MB] (25 MBps) Copying: 46/256 [MB] (21 MBps) Copying: 67/256 [MB] (20 MBps) Copying: 87/256 [MB] (20 MBps) Copying: 108/256 [MB] (20 MBps) Copying: 130/256 [MB] (21 MBps) Copying: 152/256 [MB] (21 MBps) Copying: 173/256 [MB] (21 MBps) Copying: 194/256 [MB] (21 MBps) Copying: 215/256 [MB] (21 MBps) Copying: 236/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-08-11 12:59:09.296006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.041 [2024-08-11 12:59:09.297298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.297470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:18.041 [2024-08-11 12:59:09.297627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:18.041 [2024-08-11 12:59:09.297677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.297737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:18.041 [2024-08-11 12:59:09.298401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.298563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:18.041 [2024-08-11 12:59:09.298683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:17:18.041 [2024-08-11 12:59:09.298808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.299157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.299302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:18.041 [2024-08-11 12:59:09.299423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:18.041 [2024-08-11 12:59:09.299544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.303058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.303252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:18.041 [2024-08-11 12:59:09.303386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.451 ms 00:17:18.041 [2024-08-11 12:59:09.303502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.310348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 
12:59:09.310509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:18.041 [2024-08-11 12:59:09.310658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.751 ms 00:17:18.041 [2024-08-11 12:59:09.310793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.312239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.312432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.041 [2024-08-11 12:59:09.312593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:17:18.041 [2024-08-11 12:59:09.312721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.316098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.316180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.041 [2024-08-11 12:59:09.316347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:17:18.041 [2024-08-11 12:59:09.316401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.316564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.316627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.041 [2024-08-11 12:59:09.316670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:18.041 [2024-08-11 12:59:09.316741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.318699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.318902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:18.041 [2024-08-11 12:59:09.319009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:17:18.041 [2024-08-11 12:59:09.319130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.320665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.320856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:18.041 [2024-08-11 12:59:09.321010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:17:18.041 [2024-08-11 12:59:09.321056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.322400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.322597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.041 [2024-08-11 12:59:09.322701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:17:18.041 [2024-08-11 12:59:09.322744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.324015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.041 [2024-08-11 12:59:09.324159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.041 [2024-08-11 12:59:09.324296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:17:18.041 [2024-08-11 12:59:09.324437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.041 [2024-08-11 12:59:09.324519] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.041 [2024-08-11 12:59:09.324691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.324759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.324809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.324939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.324993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.325169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.325316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.325517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.325578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.325754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.325813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.325969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326650] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.041 [2024-08-11 12:59:09.326789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.326964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 
12:59:09.326974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:17:18.042 [2024-08-11 12:59:09.327268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.042 [2024-08-11 12:59:09.327548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.042 [2024-08-11 12:59:09.327558] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72816ad-f32f-44ac-983e-8aec6b187201 00:17:18.042 [2024-08-11 12:59:09.327573] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.042 [2024-08-11 12:59:09.327583] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.042 [2024-08-11 12:59:09.327593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.042 [2024-08-11 12:59:09.327606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.042 [2024-08-11 12:59:09.327615] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.042 [2024-08-11 12:59:09.327625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.042 [2024-08-11 12:59:09.327634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.042 [2024-08-11 12:59:09.327643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.042 [2024-08-11 12:59:09.327651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.042 [2024-08-11 12:59:09.327661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.042 [2024-08-11 12:59:09.327672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.042 [2024-08-11 12:59:09.327683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:17:18.042 [2024-08-11 12:59:09.327692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.042 [2024-08-11 12:59:09.329167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.042 [2024-08-11 12:59:09.329194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.042 [2024-08-11 12:59:09.329206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:17:18.042 [2024-08-11 12:59:09.329217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.042 [2024-08-11 12:59:09.329293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.042 [2024-08-11 12:59:09.329308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.042 [2024-08-11 12:59:09.329319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:18.042 [2024-08-11 12:59:09.329328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.042 [2024-08-11 12:59:09.333830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.042 [2024-08-11 12:59:09.333898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.042 [2024-08-11 12:59:09.333929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.042 [2024-08-11 12:59:09.333951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.042 [2024-08-11 12:59:09.334022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.042 [2024-08-11 12:59:09.334037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.042 [2024-08-11 12:59:09.334048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:18.042 [2024-08-11 12:59:09.334057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.042 [2024-08-11 12:59:09.334122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.042 [2024-08-11 12:59:09.334140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.042 [2024-08-11 12:59:09.334151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.042 [2024-08-11 12:59:09.334160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.334182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.334194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.043 [2024-08-11 12:59:09.334204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.334214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.341832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.341916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.043 [2024-08-11 12:59:09.341948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.341958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.348431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.348658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.043 [2024-08-11 12:59:09.348684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.348696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.348786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.348801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.043 [2024-08-11 12:59:09.348812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.348821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.348853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.348865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.043 [2024-08-11 12:59:09.348876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.349168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.349294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.349347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.043 [2024-08-11 12:59:09.349359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.349377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.349424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.349449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.043 
[2024-08-11 12:59:09.349460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.349469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.349537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.349551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.043 [2024-08-11 12:59:09.349566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.349576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.349664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.043 [2024-08-11 12:59:09.349679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.043 [2024-08-11 12:59:09.349688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.043 [2024-08-11 12:59:09.349698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.043 [2024-08-11 12:59:09.349846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.513 ms, result 0 00:17:18.043 00:17:18.043 00:17:18.043 12:59:09 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:18.043 12:59:09 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:18.610 12:59:10 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.869 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:18.869 [2024-08-11 12:59:10.245093] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:17:18.869 [2024-08-11 12:59:10.245270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85050 ] 00:17:18.869 [2024-08-11 12:59:10.390935] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.869 [2024-08-11 12:59:10.426034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.128 [2024-08-11 12:59:10.510747] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.129 [2024-08-11 12:59:10.510845] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.129 [2024-08-11 12:59:10.666827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.666923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.129 [2024-08-11 12:59:10.666944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:19.129 [2024-08-11 12:59:10.666954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.669439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.669479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.129 [2024-08-11 12:59:10.669509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.457 ms 00:17:19.129 [2024-08-11 12:59:10.669530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.669646] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.129 [2024-08-11 12:59:10.669922] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.129 [2024-08-11 12:59:10.669950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.669962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.129 [2024-08-11 12:59:10.669973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:19.129 [2024-08-11 12:59:10.669983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.671270] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:19.129 [2024-08-11 12:59:10.673521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.673560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:19.129 [2024-08-11 12:59:10.673591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:17:19.129 [2024-08-11 12:59:10.673601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.673681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.673701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:19.129 [2024-08-11 12:59:10.673716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:19.129 [2024-08-11 12:59:10.673726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.678169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:19.129 [2024-08-11 12:59:10.678208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.129 [2024-08-11 12:59:10.678248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.394 ms 00:17:19.129 [2024-08-11 12:59:10.678298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.678426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.678451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.129 [2024-08-11 12:59:10.678474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:19.129 [2024-08-11 12:59:10.678484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.678520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.678534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.129 [2024-08-11 12:59:10.678545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:19.129 [2024-08-11 12:59:10.678564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.678591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:19.129 [2024-08-11 12:59:10.680041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.680281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.129 [2024-08-11 12:59:10.680320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:17:19.129 [2024-08-11 12:59:10.680332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.680384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.680402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.129 [2024-08-11 12:59:10.680413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:19.129 [2024-08-11 12:59:10.680422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.680450] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:19.129 [2024-08-11 12:59:10.680478] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:19.129 [2024-08-11 12:59:10.680524] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:19.129 [2024-08-11 12:59:10.680551] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:19.129 [2024-08-11 12:59:10.680667] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:19.129 [2024-08-11 12:59:10.680688] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.129 [2024-08-11 12:59:10.680701] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:19.129 [2024-08-11 12:59:10.680714] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.129 [2024-08-11 12:59:10.680729] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.129 [2024-08-11 12:59:10.680754] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:19.129 [2024-08-11 12:59:10.680780] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.129 [2024-08-11 12:59:10.680796] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:19.129 [2024-08-11 12:59:10.680805] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:19.129 [2024-08-11 12:59:10.680815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.680824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.129 [2024-08-11 12:59:10.680834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:17:19.129 [2024-08-11 12:59:10.680856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.680994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.129 [2024-08-11 12:59:10.681017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.129 [2024-08-11 12:59:10.681028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:19.129 [2024-08-11 12:59:10.681038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.129 [2024-08-11 12:59:10.681131] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.129 [2024-08-11 12:59:10.681156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.129 [2024-08-11 12:59:10.681167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.129 [2024-08-11 12:59:10.681177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.129 [2024-08-11 12:59:10.681195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:19.129 [2024-08-11 12:59:10.681228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.129 [2024-08-11 12:59:10.681253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.129 [2024-08-11 12:59:10.681275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.129 [2024-08-11 12:59:10.681285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:19.129 [2024-08-11 12:59:10.681309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.129 [2024-08-11 12:59:10.681334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.129 [2024-08-11 12:59:10.681343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:19.129 [2024-08-11 12:59:10.681352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.129 [2024-08-11 12:59:10.681372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:19.129 [2024-08-11 12:59:10.681381] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.129 [2024-08-11 12:59:10.681399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.129 [2024-08-11 12:59:10.681416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.129 [2024-08-11 12:59:10.681425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.129 [2024-08-11 12:59:10.681443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.129 [2024-08-11 12:59:10.681457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.129 [2024-08-11 12:59:10.681476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.129 [2024-08-11 12:59:10.681485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.129 [2024-08-11 12:59:10.681502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.129 [2024-08-11 12:59:10.681511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:19.129 [2024-08-11 12:59:10.681520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.129 [2024-08-11 12:59:10.681529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.129 [2024-08-11 12:59:10.681538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:19.129 [2024-08-11 12:59:10.681547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.129 [2024-08-11 12:59:10.681556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:19.129 [2024-08-11 12:59:10.681565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:19.129 [2024-08-11 12:59:10.681574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.130 [2024-08-11 12:59:10.681583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:19.130 [2024-08-11 12:59:10.681591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:19.130 [2024-08-11 12:59:10.681603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.130 [2024-08-11 12:59:10.681612] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.130 [2024-08-11 12:59:10.681622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.130 [2024-08-11 12:59:10.681631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.130 [2024-08-11 12:59:10.681644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.130 [2024-08-11 12:59:10.681654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.130 [2024-08-11 12:59:10.681664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.130 [2024-08-11 12:59:10.681674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.130 
[2024-08-11 12:59:10.681683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.130 [2024-08-11 12:59:10.681692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.130 [2024-08-11 12:59:10.681701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.130 [2024-08-11 12:59:10.681711] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.130 [2024-08-11 12:59:10.681723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.130 [2024-08-11 12:59:10.681734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:19.130 [2024-08-11 12:59:10.681744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:19.130 [2024-08-11 12:59:10.681753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:19.130 [2024-08-11 12:59:10.681766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:19.130 [2024-08-11 12:59:10.681777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:19.130 [2024-08-11 12:59:10.681787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:19.130 [2024-08-11 12:59:10.681796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:19.130 [2024-08-11 12:59:10.681806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:19.130 [2024-08-11 12:59:10.681816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:19.130 [2024-08-11 12:59:10.681837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:19.130 [2024-08-11 12:59:10.681847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:19.130 [2024-08-11 12:59:10.681857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:19.130 [2024-08-11 12:59:10.681867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:19.130 [2024-08-11 12:59:10.681877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:19.130 [2024-08-11 12:59:10.681887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.130 [2024-08-11 12:59:10.681899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.130 [2024-08-11 12:59:10.681909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.130 [2024-08-11 12:59:10.681919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.130 [2024-08-11 12:59:10.681944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.130 [2024-08-11 12:59:10.681960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.130 [2024-08-11 12:59:10.681972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.681983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.130 [2024-08-11 12:59:10.682002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:17:19.130 [2024-08-11 12:59:10.682012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.699567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.699649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.130 [2024-08-11 12:59:10.699673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.474 ms 00:17:19.130 [2024-08-11 12:59:10.699683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.699930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.699952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.130 [2024-08-11 12:59:10.699964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:19.130 [2024-08-11 12:59:10.699974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.707435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.707477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.130 [2024-08-11 12:59:10.707507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.430 ms 00:17:19.130 [2024-08-11 12:59:10.707517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.707589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.707606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.130 [2024-08-11 12:59:10.707617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.130 [2024-08-11 12:59:10.707626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.708055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.708081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.130 [2024-08-11 12:59:10.708093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:17:19.130 [2024-08-11 12:59:10.708112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.708305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.708353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.130 [2024-08-11 12:59:10.708381] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:19.130 [2024-08-11 12:59:10.708391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.713236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.713273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.130 [2024-08-11 12:59:10.713309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.818 ms 00:17:19.130 [2024-08-11 12:59:10.713319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.130 [2024-08-11 12:59:10.715784] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:19.130 [2024-08-11 12:59:10.715857] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:19.130 [2024-08-11 12:59:10.715953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.130 [2024-08-11 12:59:10.715967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:19.130 [2024-08-11 12:59:10.715979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:17:19.130 [2024-08-11 12:59:10.715989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.730706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.730771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:19.391 [2024-08-11 12:59:10.730791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.659 ms 00:17:19.391 [2024-08-11 12:59:10.730802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.732884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.733149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:19.391 [2024-08-11 12:59:10.733178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:17:19.391 [2024-08-11 12:59:10.733190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.734979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.735050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:19.391 [2024-08-11 12:59:10.735066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:17:19.391 [2024-08-11 12:59:10.735077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.735498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.735523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.391 [2024-08-11 12:59:10.735546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:17:19.391 [2024-08-11 12:59:10.735557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.752705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.753052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:19.391 [2024-08-11 12:59:10.753086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.118 ms 00:17:19.391 [2024-08-11 12:59:10.753110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.761641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:19.391 [2024-08-11 12:59:10.776007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.776293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.391 [2024-08-11 12:59:10.776374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.776 ms 00:17:19.391 [2024-08-11 12:59:10.776386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.776542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.776561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:19.391 [2024-08-11 12:59:10.776574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:19.391 [2024-08-11 12:59:10.776602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.776683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.776713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.391 [2024-08-11 12:59:10.776725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:19.391 [2024-08-11 12:59:10.776740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.776777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.776794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.391 [2024-08-11 12:59:10.776819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:19.391 [2024-08-11 12:59:10.776845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.776938] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:19.391 [2024-08-11 12:59:10.776956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.776968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:19.391 [2024-08-11 12:59:10.776980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:19.391 [2024-08-11 12:59:10.776991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.780938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.780997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.391 [2024-08-11 12:59:10.781016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:17:19.391 [2024-08-11 12:59:10.781028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.781140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.781161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.391 [2024-08-11 12:59:10.781174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:19.391 [2024-08-11 12:59:10.781185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 
[2024-08-11 12:59:10.782406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.391 [2024-08-11 12:59:10.783678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.191 ms, result 0 00:17:19.391 [2024-08-11 12:59:10.784515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.391 [2024-08-11 12:59:10.793957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.391  Copying: 4096/4096 [kB] (average 21 MBps)[2024-08-11 12:59:10.983216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.391 [2024-08-11 12:59:10.984471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.391 [2024-08-11 12:59:10.984715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.391 [2024-08-11 12:59:10.984745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.391 [2024-08-11 12:59:10.984758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.391 [2024-08-11 12:59:10.984818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.391 [2024-08-11 12:59:10.985386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.392 [2024-08-11 12:59:10.985435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.392 [2024-08-11 12:59:10.985448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:17:19.392 [2024-08-11 12:59:10.985458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.392 [2024-08-11 12:59:10.987260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.392 [2024-08-11 12:59:10.987309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.392 [2024-08-11 12:59:10.987328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.775 ms 00:17:19.392 [2024-08-11 12:59:10.987340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:10.992229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:10.992285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.669 [2024-08-11 12:59:10.992304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.862 ms 00:17:19.669 [2024-08-11 12:59:10.992325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.002911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.003099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:19.669 [2024-08-11 12:59:11.003128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.501 ms 00:17:19.669 [2024-08-11 12:59:11.003141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.004474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.004527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.669 [2024-08-11 12:59:11.004559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 
00:17:19.669 [2024-08-11 12:59:11.004568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.007847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.007944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.669 [2024-08-11 12:59:11.007967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:17:19.669 [2024-08-11 12:59:11.007980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.008119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.008139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.669 [2024-08-11 12:59:11.008153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:19.669 [2024-08-11 12:59:11.008164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.010354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.010390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:19.669 [2024-08-11 12:59:11.010419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:17:19.669 [2024-08-11 12:59:11.010429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.011955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.011993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:19.669 [2024-08-11 12:59:11.012008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:17:19.669 [2024-08-11 12:59:11.012019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.013414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.013603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.669 [2024-08-11 12:59:11.013628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:17:19.669 [2024-08-11 12:59:11.013641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.014889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.669 [2024-08-11 12:59:11.014978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:19.669 [2024-08-11 12:59:11.014995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:17:19.669 [2024-08-11 12:59:11.015005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.669 [2024-08-11 12:59:11.015048] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:19.669 [2024-08-11 12:59:11.015071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:19.669 [2024-08-11 12:59:11.015085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015120] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 
12:59:11.015466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:19.670 [2024-08-11 12:59:11.015748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.015997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:19.670 [2024-08-11 12:59:11.016217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:19.671 [2024-08-11 12:59:11.016352] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:19.671 [2024-08-11 12:59:11.016362] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72816ad-f32f-44ac-983e-8aec6b187201 00:17:19.671 [2024-08-11 12:59:11.016378] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:19.671 [2024-08-11 12:59:11.016387] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:19.671 [2024-08-11 
12:59:11.016398] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:19.671 [2024-08-11 12:59:11.016421] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:19.671 [2024-08-11 12:59:11.016430] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:19.671 [2024-08-11 12:59:11.016440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:19.671 [2024-08-11 12:59:11.016449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:19.671 [2024-08-11 12:59:11.016458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:19.671 [2024-08-11 12:59:11.016467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:19.671 [2024-08-11 12:59:11.016476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.671 [2024-08-11 12:59:11.016486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:19.671 [2024-08-11 12:59:11.016497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:17:19.671 [2024-08-11 12:59:11.016507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.018370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.671 [2024-08-11 12:59:11.018526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:19.671 [2024-08-11 12:59:11.018669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:17:19.671 [2024-08-11 12:59:11.018718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.018931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.671 [2024-08-11 12:59:11.018993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:19.671 [2024-08-11 12:59:11.019124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:19.671 [2024-08-11 12:59:11.019184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.024231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.024465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.671 [2024-08-11 12:59:11.024618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.024667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.024974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.025115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.671 [2024-08-11 12:59:11.025266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.025414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.025536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.025628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.671 [2024-08-11 12:59:11.025737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.025787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.025885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.025965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.671 [2024-08-11 12:59:11.026013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.026131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.035004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.035261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.671 [2024-08-11 12:59:11.035305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.035317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.042724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.042772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.671 [2024-08-11 12:59:11.042820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.042830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.042917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.042947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.671 [2024-08-11 12:59:11.042961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.042972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.043023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.043037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.671 [2024-08-11 12:59:11.043049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.043060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.043166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.043190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.671 [2024-08-11 12:59:11.043203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.043214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.043326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.043344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:19.671 [2024-08-11 12:59:11.043354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.043373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.043430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.043450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.671 [2024-08-11 12:59:11.043461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.043470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 
[2024-08-11 12:59:11.043520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.671 [2024-08-11 12:59:11.043536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.671 [2024-08-11 12:59:11.043546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.671 [2024-08-11 12:59:11.043555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.671 [2024-08-11 12:59:11.043698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.199 ms, result 0 00:17:19.942 00:17:19.942 00:17:19.942 12:59:11 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85064 00:17:19.942 12:59:11 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85064 00:17:19.942 12:59:11 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 85064 ']' 00:17:19.942 12:59:11 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:19.942 12:59:11 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.942 12:59:11 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.942 12:59:11 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.942 12:59:11 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.942 12:59:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:19.942 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:19.942 [2024-08-11 12:59:11.391265] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:17:19.942 [2024-08-11 12:59:11.391449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85064 ] 00:17:20.200 [2024-08-11 12:59:11.540758] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.200 [2024-08-11 12:59:11.578929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.766 12:59:12 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:20.766 12:59:12 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:20.766 12:59:12 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:21.023 [2024-08-11 12:59:12.564344] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.023 [2024-08-11 12:59:12.564671] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.283 [2024-08-11 12:59:12.736725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.737041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:21.283 [2024-08-11 12:59:12.737079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.283 [2024-08-11 12:59:12.737104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.283 [2024-08-11 12:59:12.739583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.739624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.283 [2024-08-11 12:59:12.739662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:17:21.283 [2024-08-11 12:59:12.739673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.283 [2024-08-11 12:59:12.739806] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:21.283 [2024-08-11 12:59:12.740148] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:21.283 [2024-08-11 12:59:12.740196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.740224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.283 [2024-08-11 12:59:12.740268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:17:21.283 [2024-08-11 12:59:12.740293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.283 [2024-08-11 12:59:12.741648] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:21.283 [2024-08-11 12:59:12.743928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.743979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:21.283 [2024-08-11 12:59:12.743998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:17:21.283 [2024-08-11 12:59:12.744013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.283 [2024-08-11 12:59:12.744092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.744118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:21.283 [2024-08-11 12:59:12.744133] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:21.283 [2024-08-11 12:59:12.744149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.283 [2024-08-11 12:59:12.748644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.748699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.283 [2024-08-11 12:59:12.748722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.404 ms 00:17:21.283 [2024-08-11 12:59:12.748737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.283 [2024-08-11 12:59:12.748886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.748945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.283 [2024-08-11 12:59:12.748960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:21.283 [2024-08-11 12:59:12.748973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.283 [2024-08-11 12:59:12.749008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.283 [2024-08-11 12:59:12.749027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:21.284 [2024-08-11 12:59:12.749039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:21.284 [2024-08-11 12:59:12.749051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.284 [2024-08-11 12:59:12.749083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:21.284 [2024-08-11 12:59:12.750504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.284 [2024-08-11 12:59:12.750537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.284 [2024-08-11 12:59:12.750574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:17:21.284 [2024-08-11 12:59:12.750586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.284 [2024-08-11 12:59:12.750662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.284 [2024-08-11 12:59:12.750692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:21.284 [2024-08-11 12:59:12.750704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:21.284 [2024-08-11 12:59:12.750715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.284 [2024-08-11 12:59:12.750741] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:21.284 [2024-08-11 12:59:12.750767] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:21.284 [2024-08-11 12:59:12.750809] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:21.284 [2024-08-11 12:59:12.750827] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:21.284 [2024-08-11 12:59:12.750954] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:21.284 [2024-08-11 12:59:12.751223] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:21.284 [2024-08-11 12:59:12.751308] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:21.284 [2024-08-11 12:59:12.751532] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:21.284 [2024-08-11 12:59:12.751623] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:21.284 [2024-08-11 12:59:12.751655] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:21.284 [2024-08-11 12:59:12.751672] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:21.284 [2024-08-11 12:59:12.751682] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:21.284 [2024-08-11 12:59:12.751694] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:21.284 [2024-08-11 12:59:12.751707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.284 [2024-08-11 12:59:12.751721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:21.284 [2024-08-11 12:59:12.751733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:17:21.284 [2024-08-11 12:59:12.751753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.284 [2024-08-11 12:59:12.751913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.284 [2024-08-11 12:59:12.751951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:21.284 [2024-08-11 12:59:12.751968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:21.284 [2024-08-11 12:59:12.751995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.284 [2024-08-11 12:59:12.752118] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:21.284 [2024-08-11 12:59:12.752142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:21.284 [2024-08-11 12:59:12.752156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:21.284 [2024-08-11 12:59:12.752235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:21.284 [2024-08-11 12:59:12.752298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.284 [2024-08-11 12:59:12.752319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:21.284 [2024-08-11 12:59:12.752330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:21.284 [2024-08-11 12:59:12.752340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.284 [2024-08-11 12:59:12.752351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:21.284 [2024-08-11 12:59:12.752361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:21.284 [2024-08-11 12:59:12.752372] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.284 
[2024-08-11 12:59:12.752396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:21.284 [2024-08-11 12:59:12.752407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:21.284 [2024-08-11 12:59:12.752453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:21.284 [2024-08-11 12:59:12.752487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:21.284 [2024-08-11 12:59:12.752517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:21.284 [2024-08-11 12:59:12.752548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:21.284 [2024-08-11 12:59:12.752577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.284 [2024-08-11 12:59:12.752597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:21.284 [2024-08-11 12:59:12.752608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:21.284 [2024-08-11 12:59:12.752618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.284 [2024-08-11 12:59:12.752630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:21.284 [2024-08-11 12:59:12.752640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:21.284 [2024-08-11 12:59:12.752651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:21.284 [2024-08-11 12:59:12.752672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:21.284 [2024-08-11 12:59:12.752680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752691] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:21.284 [2024-08-11 12:59:12.752701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:21.284 [2024-08-11 12:59:12.752712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.284 [2024-08-11 12:59:12.752737] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:21.284 [2024-08-11 12:59:12.752747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:21.284 [2024-08-11 12:59:12.752758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:21.284 [2024-08-11 12:59:12.752767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:21.284 [2024-08-11 12:59:12.752778] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:21.284 [2024-08-11 12:59:12.752787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:21.284 [2024-08-11 12:59:12.752801] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:21.284 [2024-08-11 12:59:12.752814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.284 [2024-08-11 12:59:12.752827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:21.284 [2024-08-11 12:59:12.752838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:21.284 [2024-08-11 12:59:12.752849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:21.284 [2024-08-11 12:59:12.752860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:21.284 [2024-08-11 12:59:12.752871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:21.284 [2024-08-11 12:59:12.752881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:21.284 [2024-08-11 12:59:12.752893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:21.284 [2024-08-11 12:59:12.752903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:21.284 [2024-08-11 12:59:12.752915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:21.285 [2024-08-11 12:59:12.752924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:21.285 [2024-08-11 12:59:12.752936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:21.285 [2024-08-11 12:59:12.752947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:21.285 [2024-08-11 12:59:12.752987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:21.285 [2024-08-11 12:59:12.753000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:21.285 [2024-08-11 12:59:12.753014] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:21.285 [2024-08-11 
12:59:12.753026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.285 [2024-08-11 12:59:12.753039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:21.285 [2024-08-11 12:59:12.753049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:21.285 [2024-08-11 12:59:12.753074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:21.285 [2024-08-11 12:59:12.753085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:21.285 [2024-08-11 12:59:12.753099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.753126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:21.285 [2024-08-11 12:59:12.753140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:17:21.285 [2024-08-11 12:59:12.753150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.761657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.761717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.285 [2024-08-11 12:59:12.761753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.413 ms 00:17:21.285 [2024-08-11 12:59:12.761764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.761970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.762004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.285 [2024-08-11 12:59:12.762022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:21.285 [2024-08-11 12:59:12.762033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.770385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.770432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.285 [2024-08-11 12:59:12.770466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.322 ms 00:17:21.285 [2024-08-11 12:59:12.770478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.770567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.770583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.285 [2024-08-11 12:59:12.770597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.285 [2024-08-11 12:59:12.770607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.771016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.771034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.285 [2024-08-11 12:59:12.771053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:17:21.285 [2024-08-11 12:59:12.771064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:21.285 [2024-08-11 12:59:12.771207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.771245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.285 [2024-08-11 12:59:12.771275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:21.285 [2024-08-11 12:59:12.771286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.777249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.777286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.285 [2024-08-11 12:59:12.777323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.902 ms 00:17:21.285 [2024-08-11 12:59:12.777334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.779819] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:21.285 [2024-08-11 12:59:12.779883] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:21.285 [2024-08-11 12:59:12.779926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.779940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:21.285 [2024-08-11 12:59:12.779955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.451 ms 00:17:21.285 [2024-08-11 12:59:12.779968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.794864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.794966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:21.285 [2024-08-11 12:59:12.795009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.835 ms 00:17:21.285 [2024-08-11 12:59:12.795022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.797041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.797080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:21.285 [2024-08-11 12:59:12.797115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:17:21.285 [2024-08-11 12:59:12.797127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.798924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.798988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:21.285 [2024-08-11 12:59:12.799009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:17:21.285 [2024-08-11 12:59:12.799020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.799463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.799495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.285 [2024-08-11 12:59:12.799512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:17:21.285 [2024-08-11 12:59:12.799522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.828368] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.828691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:21.285 [2024-08-11 12:59:12.828731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.810 ms 00:17:21.285 [2024-08-11 12:59:12.828745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.836955] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:21.285 [2024-08-11 12:59:12.851087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.851178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:21.285 [2024-08-11 12:59:12.851203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.131 ms 00:17:21.285 [2024-08-11 12:59:12.851219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.851410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.851431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:21.285 [2024-08-11 12:59:12.851452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:21.285 [2024-08-11 12:59:12.851465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.851526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.851559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:21.285 [2024-08-11 12:59:12.851570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:21.285 [2024-08-11 12:59:12.851582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.851616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.851631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:21.285 [2024-08-11 12:59:12.851643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:21.285 [2024-08-11 12:59:12.851658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.851697] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:21.285 [2024-08-11 12:59:12.851714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.851725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:21.285 [2024-08-11 12:59:12.851737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:21.285 [2024-08-11 12:59:12.851747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.855658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.855698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:21.285 [2024-08-11 12:59:12.855749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.880 ms 00:17:21.285 [2024-08-11 12:59:12.855760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.855934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.285 [2024-08-11 12:59:12.855958] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:21.285 [2024-08-11 12:59:12.855975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:21.285 [2024-08-11 12:59:12.855987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.285 [2024-08-11 12:59:12.857299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.285 [2024-08-11 12:59:12.858562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.001 ms, result 0 00:17:21.285 [2024-08-11 12:59:12.859773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.544 Some configs were skipped because the RPC state that can call them passed over. 00:17:21.544 12:59:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:21.544 [2024-08-11 12:59:13.118678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.544 [2024-08-11 12:59:13.118971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:21.544 [2024-08-11 12:59:13.119113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:17:21.544 [2024-08-11 12:59:13.119311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.544 [2024-08-11 12:59:13.119414] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.163 ms, result 0 00:17:21.544 true 00:17:21.544 12:59:13 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:21.802 [2024-08-11 12:59:13.382950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.802 [2024-08-11 12:59:13.383038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:21.802 [2024-08-11 12:59:13.383076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:17:21.802 [2024-08-11 12:59:13.383103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.802 [2024-08-11 12:59:13.383197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.535 ms, result 0 00:17:21.802 true 00:17:22.062 12:59:13 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85064 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 85064 ']' 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 85064 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85064 00:17:22.062 killing process with pid 85064 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85064' 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 85064 00:17:22.062 12:59:13 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 85064 00:17:22.062 [2024-08-11 12:59:13.541411] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.541498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.062 [2024-08-11 12:59:13.541518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.062 [2024-08-11 12:59:13.541534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.541565] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:22.062 [2024-08-11 12:59:13.542060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.542078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.062 [2024-08-11 12:59:13.542092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:17:22.062 [2024-08-11 12:59:13.542103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.542438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.542469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.062 [2024-08-11 12:59:13.542497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:17:22.062 [2024-08-11 12:59:13.542508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.546240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.546298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.062 [2024-08-11 12:59:13.546333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:17:22.062 [2024-08-11 12:59:13.546345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.553211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.553304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:22.062 [2024-08-11 12:59:13.553335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.802 ms 00:17:22.062 [2024-08-11 12:59:13.553346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.554910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.554981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.062 [2024-08-11 12:59:13.555019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:17:22.062 [2024-08-11 12:59:13.555032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.558448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.558484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.062 [2024-08-11 12:59:13.558518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:17:22.062 [2024-08-11 12:59:13.558532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.558653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.558670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.062 [2024-08-11 12:59:13.558683] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:22.062 [2024-08-11 12:59:13.558693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.560562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.560758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:22.062 [2024-08-11 12:59:13.560788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:17:22.062 [2024-08-11 12:59:13.560800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.562600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.562635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:22.062 [2024-08-11 12:59:13.562671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.638 ms 00:17:22.062 [2024-08-11 12:59:13.562681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.563956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.563995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.062 [2024-08-11 12:59:13.564014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:17:22.062 [2024-08-11 12:59:13.564026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.565346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.062 [2024-08-11 12:59:13.565379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.062 [2024-08-11 12:59:13.565411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:17:22.062 [2024-08-11 12:59:13.565421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.062 [2024-08-11 12:59:13.565478] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.062 [2024-08-11 12:59:13.565510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565631] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.062 [2024-08-11 12:59:13.565702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.565992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 
12:59:13.566023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:22.063 [2024-08-11 12:59:13.566433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.063 [2024-08-11 12:59:13.566943] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.063 [2024-08-11 12:59:13.566958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72816ad-f32f-44ac-983e-8aec6b187201 00:17:22.063 [2024-08-11 12:59:13.566970] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.063 [2024-08-11 12:59:13.566994] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.063 [2024-08-11 12:59:13.567030] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.063 [2024-08-11 12:59:13.567045] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.063 [2024-08-11 12:59:13.567057] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.064 [2024-08-11 12:59:13.567086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:22.064 [2024-08-11 12:59:13.567098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.064 [2024-08-11 12:59:13.567111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.064 [2024-08-11 12:59:13.567122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.064 [2024-08-11 12:59:13.567136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.064 
[2024-08-11 12:59:13.567148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.064 [2024-08-11 12:59:13.567163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:17:22.064 [2024-08-11 12:59:13.567179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.568762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.064 [2024-08-11 12:59:13.568803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.064 [2024-08-11 12:59:13.568818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:17:22.064 [2024-08-11 12:59:13.568828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.568952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.064 [2024-08-11 12:59:13.568968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.064 [2024-08-11 12:59:13.568982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:22.064 [2024-08-11 12:59:13.568993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.574739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.574933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.064 [2024-08-11 12:59:13.575079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.575140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.575336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.575428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.064 [2024-08-11 12:59:13.575543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.575600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.575713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.575776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.064 [2024-08-11 12:59:13.575825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.575971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.576145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.576240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.064 [2024-08-11 12:59:13.576474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.576527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.585541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.585803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.064 [2024-08-11 12:59:13.586000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.586068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.593332] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.593553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.064 [2024-08-11 12:59:13.593687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.593743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.593866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.593972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.064 [2024-08-11 12:59:13.594024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.594064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.594143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.594202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.064 [2024-08-11 12:59:13.594441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.594496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.594627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.594687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.064 [2024-08-11 12:59:13.594710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.594722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.594787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.594805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.064 [2024-08-11 12:59:13.594819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.594830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.595088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.595162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.064 [2024-08-11 12:59:13.595215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.595435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.595518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.064 [2024-08-11 12:59:13.595560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.064 [2024-08-11 12:59:13.595576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.064 [2024-08-11 12:59:13.595603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.064 [2024-08-11 12:59:13.595788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.334 ms, result 0 00:17:22.323 12:59:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.323 
Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:22.323 [2024-08-11 12:59:13.868428] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:17:22.323 [2024-08-11 12:59:13.868565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85100 ] 00:17:22.581 [2024-08-11 12:59:14.010731] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.581 [2024-08-11 12:59:14.052498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.581 [2024-08-11 12:59:14.138732] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.581 [2024-08-11 12:59:14.138830] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.841 [2024-08-11 12:59:14.295584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.295647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.841 [2024-08-11 12:59:14.295680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:22.841 [2024-08-11 12:59:14.295690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.298338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.298378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.841 [2024-08-11 12:59:14.298409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:17:22.841 [2024-08-11 12:59:14.298429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.298545] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.841 [2024-08-11 12:59:14.298825] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.841 [2024-08-11 12:59:14.298852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.298923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.841 [2024-08-11 12:59:14.298938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:17:22.841 [2024-08-11 12:59:14.298948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.300305] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.841 [2024-08-11 12:59:14.302793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.302829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.841 [2024-08-11 12:59:14.302859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.511 ms 00:17:22.841 [2024-08-11 12:59:14.302885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.303001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.303021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.841 [2024-08-11 12:59:14.303038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:22.841 [2024-08-11 
12:59:14.303057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.307738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.307775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.841 [2024-08-11 12:59:14.307815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.625 ms 00:17:22.841 [2024-08-11 12:59:14.307828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.308052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.308077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.841 [2024-08-11 12:59:14.308093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:22.841 [2024-08-11 12:59:14.308105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.308148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.308174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.841 [2024-08-11 12:59:14.308186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:22.841 [2024-08-11 12:59:14.308222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.308258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.841 [2024-08-11 12:59:14.309725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.309757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.841 [2024-08-11 12:59:14.309786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:17:22.841 [2024-08-11 12:59:14.309794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.309841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.309857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.841 [2024-08-11 12:59:14.309884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:22.841 [2024-08-11 12:59:14.309920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.309956] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.841 [2024-08-11 12:59:14.309994] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:22.841 [2024-08-11 12:59:14.310041] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.841 [2024-08-11 12:59:14.310068] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:22.841 [2024-08-11 12:59:14.310170] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.841 [2024-08-11 12:59:14.310206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.841 [2024-08-11 12:59:14.310219] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
00:17:22.841 [2024-08-11 12:59:14.310247] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.841 [2024-08-11 12:59:14.310277] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.841 [2024-08-11 12:59:14.310287] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.841 [2024-08-11 12:59:14.310296] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.841 [2024-08-11 12:59:14.310304] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.841 [2024-08-11 12:59:14.310313] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.841 [2024-08-11 12:59:14.310322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.310331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.841 [2024-08-11 12:59:14.310341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:17:22.841 [2024-08-11 12:59:14.310352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.310451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.841 [2024-08-11 12:59:14.310473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.841 [2024-08-11 12:59:14.310484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:22.841 [2024-08-11 12:59:14.310492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.841 [2024-08-11 12:59:14.310581] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.841 [2024-08-11 12:59:14.310596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.841 [2024-08-11 12:59:14.310606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.841 [2024-08-11 12:59:14.310626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.841 [2024-08-11 12:59:14.310635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.841 [2024-08-11 12:59:14.310643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.841 [2024-08-11 12:59:14.310652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.841 [2024-08-11 12:59:14.310661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.841 [2024-08-11 12:59:14.310670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.841 [2024-08-11 12:59:14.310677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.841 [2024-08-11 12:59:14.310689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.841 [2024-08-11 12:59:14.310698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.841 [2024-08-11 12:59:14.310706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.841 [2024-08-11 12:59:14.310715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.841 [2024-08-11 12:59:14.310724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:22.841 [2024-08-11 12:59:14.310732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.842 [2024-08-11 12:59:14.310740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:22.842 [2024-08-11 12:59:14.310748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:22.842 [2024-08-11 12:59:14.310756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.842 [2024-08-11 12:59:14.310764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.842 [2024-08-11 12:59:14.310772] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.842 [2024-08-11 12:59:14.310780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.842 [2024-08-11 12:59:14.310788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.842 [2024-08-11 12:59:14.310797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.842 [2024-08-11 12:59:14.310804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.842 [2024-08-11 12:59:14.310812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.842 [2024-08-11 12:59:14.310825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.842 [2024-08-11 12:59:14.310834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.842 [2024-08-11 12:59:14.310842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.842 [2024-08-11 12:59:14.310850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:22.842 [2024-08-11 12:59:14.310857] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.842 [2024-08-11 12:59:14.310865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.842 [2024-08-11 12:59:14.310891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:22.842 [2024-08-11 12:59:14.310899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.842 [2024-08-11 12:59:14.310908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.842 [2024-08-11 12:59:14.310918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:22.842 [2024-08-11 12:59:14.311270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.842 [2024-08-11 12:59:14.311331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.842 [2024-08-11 12:59:14.311369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:22.842 [2024-08-11 12:59:14.311403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.842 [2024-08-11 12:59:14.311504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.842 [2024-08-11 12:59:14.311549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:22.842 [2024-08-11 12:59:14.311590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.842 [2024-08-11 12:59:14.311625] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.842 [2024-08-11 12:59:14.311724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.842 [2024-08-11 12:59:14.311769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.842 [2024-08-11 12:59:14.311814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.842 [2024-08-11 12:59:14.311848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:22.842 [2024-08-11 12:59:14.312004] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.842 [2024-08-11 12:59:14.312045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.842 [2024-08-11 12:59:14.312081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.842 [2024-08-11 12:59:14.312179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.842 [2024-08-11 12:59:14.312227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.842 [2024-08-11 12:59:14.312346] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.842 [2024-08-11 12:59:14.312422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.842 [2024-08-11 12:59:14.312491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.842 [2024-08-11 12:59:14.312542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:22.842 [2024-08-11 12:59:14.312592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.842 [2024-08-11 12:59:14.312722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:22.842 [2024-08-11 12:59:14.312736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:22.842 [2024-08-11 12:59:14.312747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:22.842 [2024-08-11 12:59:14.312757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:22.842 [2024-08-11 12:59:14.312766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:22.842 [2024-08-11 12:59:14.312777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:22.842 [2024-08-11 12:59:14.312799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:22.842 [2024-08-11 12:59:14.312809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:22.842 [2024-08-11 12:59:14.312819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:22.842 [2024-08-11 12:59:14.312829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:22.842 [2024-08-11 12:59:14.312840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:22.842 [2024-08-11 12:59:14.312850] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.842 [2024-08-11 12:59:14.312861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.842 [2024-08-11 12:59:14.312889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.842 [2024-08-11 12:59:14.312926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.842 [2024-08-11 12:59:14.312942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.842 [2024-08-11 12:59:14.312957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.842 [2024-08-11 12:59:14.312972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.312984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.842 [2024-08-11 12:59:14.312997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:17:22.842 [2024-08-11 12:59:14.313010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.330569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.330795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.842 [2024-08-11 12:59:14.330944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.441 ms 00:17:22.842 [2024-08-11 12:59:14.330997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.331208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.331300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.842 [2024-08-11 12:59:14.331412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:22.842 [2024-08-11 12:59:14.331563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.339053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.339246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.842 [2024-08-11 12:59:14.339402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.413 ms 00:17:22.842 [2024-08-11 12:59:14.339452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.339671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.339728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.842 [2024-08-11 12:59:14.339938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.842 [2024-08-11 12:59:14.340001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.340428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.340599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.842 [2024-08-11 12:59:14.340702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:17:22.842 [2024-08-11 12:59:14.340746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.341123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.341267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.842 [2024-08-11 12:59:14.341394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:22.842 [2024-08-11 12:59:14.341511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.346460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.346651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.842 [2024-08-11 12:59:14.346775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:17:22.842 [2024-08-11 12:59:14.346821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.349148] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:22.842 [2024-08-11 12:59:14.349357] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.842 [2024-08-11 12:59:14.349485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.349526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.842 [2024-08-11 12:59:14.349561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:17:22.842 [2024-08-11 12:59:14.349593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.363685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.842 [2024-08-11 12:59:14.363909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.842 [2024-08-11 12:59:14.364094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.933 ms 00:17:22.842 [2024-08-11 12:59:14.364149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.842 [2024-08-11 12:59:14.366162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.366358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:22.843 [2024-08-11 12:59:14.366473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.765 ms 00:17:22.843 [2024-08-11 12:59:14.366519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.368393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.368560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:22.843 [2024-08-11 12:59:14.368584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:17:22.843 [2024-08-11 12:59:14.368594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.369056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.369108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.843 [2024-08-11 12:59:14.369121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:17:22.843 [2024-08-11 12:59:14.369130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.386567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.386631] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:22.843 [2024-08-11 12:59:14.386662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.406 ms 00:17:22.843 [2024-08-11 12:59:14.386677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.395368] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.843 [2024-08-11 12:59:14.409696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.409768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.843 [2024-08-11 12:59:14.409791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.915 ms 00:17:22.843 [2024-08-11 12:59:14.409801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.410020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.410056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.843 [2024-08-11 12:59:14.410069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:22.843 [2024-08-11 12:59:14.410093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.410170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.410189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.843 [2024-08-11 12:59:14.410200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:22.843 [2024-08-11 12:59:14.410214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.410272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.410303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.843 [2024-08-11 12:59:14.410313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:22.843 [2024-08-11 12:59:14.410322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.410414] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.843 [2024-08-11 12:59:14.410430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.410440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.843 [2024-08-11 12:59:14.410450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:22.843 [2024-08-11 12:59:14.410459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.414512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.414552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.843 [2024-08-11 12:59:14.414567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.023 ms 00:17:22.843 [2024-08-11 12:59:14.414577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.414673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.843 [2024-08-11 12:59:14.414692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.843 [2024-08-11 12:59:14.414703] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:22.843 [2024-08-11 12:59:14.414713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.843 [2024-08-11 12:59:14.415833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.843 [2024-08-11 12:59:14.417160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.887 ms, result 0 00:17:22.843 [2024-08-11 12:59:14.417974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.843 [2024-08-11 12:59:14.427223] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.204  Copying: 24/256 [MB] (24 MBps) Copying: 45/256 [MB] (21 MBps) Copying: 66/256 [MB] (21 MBps) Copying: 86/256 [MB] (20 MBps) Copying: 108/256 [MB] (21 MBps) Copying: 129/256 [MB] (21 MBps) Copying: 150/256 [MB] (20 MBps) Copying: 172/256 [MB] (21 MBps) Copying: 193/256 [MB] (21 MBps) Copying: 214/256 [MB] (21 MBps) Copying: 235/256 [MB] (20 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-08-11 12:59:26.622018] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.204 [2024-08-11 12:59:26.623335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.204 [2024-08-11 12:59:26.623384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.204 [2024-08-11 12:59:26.623407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.204 [2024-08-11 12:59:26.623422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.204 [2024-08-11 12:59:26.623461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.204 [2024-08-11 12:59:26.624002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.204 [2024-08-11 12:59:26.624045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.204 [2024-08-11 12:59:26.624073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:17:35.204 [2024-08-11 12:59:26.624087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.624506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.624534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.205 [2024-08-11 12:59:26.624551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:17:35.205 [2024-08-11 12:59:26.624565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.629756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.629798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.205 [2024-08-11 12:59:26.629827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.163 ms 00:17:35.205 [2024-08-11 12:59:26.629848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.638001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.638033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.205 [2024-08-11 
12:59:26.638060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.098 ms 00:17:35.205 [2024-08-11 12:59:26.638069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.639561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.639613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.205 [2024-08-11 12:59:26.639642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:17:35.205 [2024-08-11 12:59:26.639651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.642700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.642957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.205 [2024-08-11 12:59:26.642984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:17:35.205 [2024-08-11 12:59:26.643005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.643141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.643159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.205 [2024-08-11 12:59:26.643172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:35.205 [2024-08-11 12:59:26.643182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.645207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.645254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:35.205 [2024-08-11 12:59:26.645281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:17:35.205 [2024-08-11 12:59:26.645289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.646792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.646839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:35.205 [2024-08-11 12:59:26.646882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:17:35.205 [2024-08-11 12:59:26.646907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.648219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.648284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.205 [2024-08-11 12:59:26.648310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:17:35.205 [2024-08-11 12:59:26.648319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.649732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.205 [2024-08-11 12:59:26.649977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.205 [2024-08-11 12:59:26.650003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:17:35.205 [2024-08-11 12:59:26.650015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.205 [2024-08-11 12:59:26.650063] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.205 [2024-08-11 12:59:26.650087] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650444] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 
[2024-08-11 12:59:26.650709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.205 [2024-08-11 12:59:26.650774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:17:35.206 [2024-08-11 12:59:26.650981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.650991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:17:35.206 [2024-08-11 12:59:26.651305] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.206 [2024-08-11 12:59:26.651314] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d72816ad-f32f-44ac-983e-8aec6b187201 00:17:35.206 [2024-08-11 12:59:26.651328] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.206 [2024-08-11 12:59:26.651337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:35.206 [2024-08-11 12:59:26.651345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.206 [2024-08-11 12:59:26.651358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.206 [2024-08-11 12:59:26.651367] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.206 [2024-08-11 12:59:26.651376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.206 [2024-08-11 12:59:26.651385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.206 [2024-08-11 12:59:26.651393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.206 [2024-08-11 12:59:26.651401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.206 [2024-08-11 12:59:26.651410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.206 [2024-08-11 12:59:26.651420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.206 [2024-08-11 12:59:26.651430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:17:35.206 [2024-08-11 12:59:26.651439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.652812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.206 [2024-08-11 12:59:26.652837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.206 [2024-08-11 12:59:26.652850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:17:35.206 [2024-08-11 12:59:26.652875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.652968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.206 [2024-08-11 12:59:26.652982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.206 [2024-08-11 12:59:26.652994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:35.206 [2024-08-11 12:59:26.653209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.657924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.658130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.206 [2024-08-11 12:59:26.658156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.658169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.658259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.658288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.206 [2024-08-11 12:59:26.658299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.658308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:35.206 [2024-08-11 12:59:26.658368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.658385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.206 [2024-08-11 12:59:26.658395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.658415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.658477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.658491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.206 [2024-08-11 12:59:26.658511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.658520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.666429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.666482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.206 [2024-08-11 12:59:26.666512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.666521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.673261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.673322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.206 [2024-08-11 12:59:26.673352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.673362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.673437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.673452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.206 [2024-08-11 12:59:26.673462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.673471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.673500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.673512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.206 [2024-08-11 12:59:26.673521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.673530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.673609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.673630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.206 [2024-08-11 12:59:26.673649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 12:59:26.673658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.673700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.206 [2024-08-11 12:59:26.673715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.206 [2024-08-11 12:59:26.673725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.206 [2024-08-11 
12:59:26.673734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.206 [2024-08-11 12:59:26.673786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.207 [2024-08-11 12:59:26.673799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.207 [2024-08-11 12:59:26.673815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.207 [2024-08-11 12:59:26.673824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.207 [2024-08-11 12:59:26.673931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.207 [2024-08-11 12:59:26.673973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.207 [2024-08-11 12:59:26.673985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.207 [2024-08-11 12:59:26.674003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.207 [2024-08-11 12:59:26.674187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.839 ms, result 0 00:17:35.465 00:17:35.465 00:17:35.465 12:59:26 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:36.033 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:36.033 12:59:27 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:36.033 12:59:27 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:36.033 12:59:27 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:36.033 12:59:27 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:36.033 12:59:27 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:36.033 12:59:27 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:36.033 12:59:27 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85064 00:17:36.033 12:59:27 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 85064 ']' 00:17:36.033 12:59:27 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 85064 00:17:36.033 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (85064) - No such process 00:17:36.033 Process with pid 85064 is not found 00:17:36.033 12:59:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 85064 is not found' 00:17:36.033 00:17:36.033 real 0m57.553s 00:17:36.033 user 1m19.728s 00:17:36.033 sys 0m6.080s 00:17:36.033 ************************************ 00:17:36.033 END TEST ftl_trim 00:17:36.033 ************************************ 00:17:36.033 12:59:27 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.033 12:59:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:36.033 12:59:27 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:36.033 12:59:27 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:36.033 12:59:27 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.033 12:59:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:36.033 ************************************ 00:17:36.033 START TEST ftl_restore 00:17:36.033 ************************************ 00:17:36.033 12:59:27 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:17:36.033 * Looking for test storage... 00:17:36.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.033 12:59:27 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:36.033 12:59:27 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:36.033 12:59:27 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.033 12:59:27 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.033 12:59:27 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:36.291 12:59:27 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # 
mount_dir=/tmp/tmp.0JT0r3XItB 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:36.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85301 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85301 00:17:36.292 12:59:27 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 85301 ']' 00:17:36.292 12:59:27 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.292 12:59:27 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.292 12:59:27 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:36.292 12:59:27 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.292 12:59:27 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:36.292 12:59:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:36.292 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:36.292 [2024-08-11 12:59:27.774736] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:17:36.292 [2024-08-11 12:59:27.775080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85301 ] 00:17:36.550 [2024-08-11 12:59:27.924729] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.550 [2024-08-11 12:59:27.960501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.117 12:59:28 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:37.117 12:59:28 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:17:37.117 12:59:28 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:37.117 12:59:28 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:37.117 12:59:28 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:37.117 12:59:28 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:37.117 12:59:28 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:37.117 12:59:28 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:37.684 12:59:29 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:37.685 12:59:29 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:37.685 12:59:29 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:37.685 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:37.685 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:37.685 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:37.685 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:37.685 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:37.943 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:37.943 { 00:17:37.943 "name": "nvme0n1", 00:17:37.943 "aliases": [ 00:17:37.943 "af8857b0-18bb-4c78-9f7c-774664a433ed" 00:17:37.943 ], 00:17:37.943 "product_name": "NVMe disk", 00:17:37.943 "block_size": 4096, 00:17:37.943 "num_blocks": 1310720, 00:17:37.943 "uuid": "af8857b0-18bb-4c78-9f7c-774664a433ed", 00:17:37.943 "assigned_rate_limits": { 00:17:37.943 "rw_ios_per_sec": 0, 00:17:37.943 "rw_mbytes_per_sec": 0, 00:17:37.943 "r_mbytes_per_sec": 0, 00:17:37.943 "w_mbytes_per_sec": 0 00:17:37.943 }, 00:17:37.943 "claimed": true, 00:17:37.943 "claim_type": "read_many_write_one", 00:17:37.943 "zoned": false, 00:17:37.943 "supported_io_types": { 00:17:37.943 "read": true, 00:17:37.943 "write": true, 00:17:37.943 "unmap": true, 00:17:37.943 "flush": true, 00:17:37.943 "reset": true, 00:17:37.943 "nvme_admin": true, 00:17:37.943 "nvme_io": true, 00:17:37.943 "nvme_io_md": false, 00:17:37.943 "write_zeroes": true, 00:17:37.943 "zcopy": false, 00:17:37.943 "get_zone_info": false, 00:17:37.943 "zone_management": false, 00:17:37.944 "zone_append": false, 00:17:37.944 "compare": true, 00:17:37.944 "compare_and_write": false, 00:17:37.944 "abort": true, 00:17:37.944 "seek_hole": false, 00:17:37.944 "seek_data": false, 00:17:37.944 "copy": true, 00:17:37.944 "nvme_iov_md": false 00:17:37.944 }, 00:17:37.944 "driver_specific": { 00:17:37.944 "nvme": [ 00:17:37.944 { 00:17:37.944 
"pci_address": "0000:00:11.0", 00:17:37.944 "trid": { 00:17:37.944 "trtype": "PCIe", 00:17:37.944 "traddr": "0000:00:11.0" 00:17:37.944 }, 00:17:37.944 "ctrlr_data": { 00:17:37.944 "cntlid": 0, 00:17:37.944 "vendor_id": "0x1b36", 00:17:37.944 "model_number": "QEMU NVMe Ctrl", 00:17:37.944 "serial_number": "12341", 00:17:37.944 "firmware_revision": "8.0.0", 00:17:37.944 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:37.944 "oacs": { 00:17:37.944 "security": 0, 00:17:37.944 "format": 1, 00:17:37.944 "firmware": 0, 00:17:37.944 "ns_manage": 1 00:17:37.944 }, 00:17:37.944 "multi_ctrlr": false, 00:17:37.944 "ana_reporting": false 00:17:37.944 }, 00:17:37.944 "vs": { 00:17:37.944 "nvme_version": "1.4" 00:17:37.944 }, 00:17:37.944 "ns_data": { 00:17:37.944 "id": 1, 00:17:37.944 "can_share": false 00:17:37.944 } 00:17:37.944 } 00:17:37.944 ], 00:17:37.944 "mp_policy": "active_passive" 00:17:37.944 } 00:17:37.944 } 00:17:37.944 ]' 00:17:37.944 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:37.944 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:37.944 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:37.944 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:37.944 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:37.944 12:59:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:17:37.944 12:59:29 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:37.944 12:59:29 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:37.944 12:59:29 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:37.944 12:59:29 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:37.944 12:59:29 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:38.203 12:59:29 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7b318b06-58c0-48c6-8115-6e36e6847ed9 00:17:38.203 12:59:29 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:38.203 12:59:29 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7b318b06-58c0-48c6-8115-6e36e6847ed9 00:17:38.769 12:59:30 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:39.027 12:59:30 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=f969fa8d-39ee-430b-8cd5-9c56e25d644b 00:17:39.027 12:59:30 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f969fa8d-39ee-430b-8cd5-9c56e25d644b 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=41bfba7c-6783-4684-a036-6e5746daee66 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 41bfba7c-6783-4684-a036-6e5746daee66 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=41bfba7c-6783-4684-a036-6e5746daee66 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:39.285 12:59:30 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 41bfba7c-6783-4684-a036-6e5746daee66 
00:17:39.285 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=41bfba7c-6783-4684-a036-6e5746daee66 00:17:39.285 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:39.285 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:39.285 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:39.285 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41bfba7c-6783-4684-a036-6e5746daee66 00:17:39.548 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:39.548 { 00:17:39.548 "name": "41bfba7c-6783-4684-a036-6e5746daee66", 00:17:39.548 "aliases": [ 00:17:39.548 "lvs/nvme0n1p0" 00:17:39.548 ], 00:17:39.548 "product_name": "Logical Volume", 00:17:39.548 "block_size": 4096, 00:17:39.548 "num_blocks": 26476544, 00:17:39.548 "uuid": "41bfba7c-6783-4684-a036-6e5746daee66", 00:17:39.548 "assigned_rate_limits": { 00:17:39.548 "rw_ios_per_sec": 0, 00:17:39.548 "rw_mbytes_per_sec": 0, 00:17:39.548 "r_mbytes_per_sec": 0, 00:17:39.548 "w_mbytes_per_sec": 0 00:17:39.548 }, 00:17:39.548 "claimed": false, 00:17:39.548 "zoned": false, 00:17:39.548 "supported_io_types": { 00:17:39.548 "read": true, 00:17:39.548 "write": true, 00:17:39.548 "unmap": true, 00:17:39.548 "flush": false, 00:17:39.548 "reset": true, 00:17:39.548 "nvme_admin": false, 00:17:39.548 "nvme_io": false, 00:17:39.548 "nvme_io_md": false, 00:17:39.548 "write_zeroes": true, 00:17:39.548 "zcopy": false, 00:17:39.548 "get_zone_info": false, 00:17:39.548 "zone_management": false, 00:17:39.548 "zone_append": false, 00:17:39.548 "compare": false, 00:17:39.548 "compare_and_write": false, 00:17:39.548 "abort": false, 00:17:39.548 "seek_hole": true, 00:17:39.548 "seek_data": true, 00:17:39.548 "copy": false, 00:17:39.548 "nvme_iov_md": false 00:17:39.548 }, 00:17:39.548 "driver_specific": { 00:17:39.548 "lvol": { 00:17:39.548 "lvol_store_uuid": "f969fa8d-39ee-430b-8cd5-9c56e25d644b", 00:17:39.548 "base_bdev": "nvme0n1", 00:17:39.548 "thin_provision": true, 00:17:39.548 "num_allocated_clusters": 0, 00:17:39.548 "snapshot": false, 00:17:39.548 "clone": false, 00:17:39.548 "esnap_clone": false 00:17:39.548 } 00:17:39.548 } 00:17:39.548 } 00:17:39.548 ]' 00:17:39.548 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:39.548 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:39.548 12:59:30 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:39.548 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:39.548 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:39.548 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:39.548 12:59:31 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:39.549 12:59:31 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:39.549 12:59:31 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:39.819 12:59:31 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:39.819 12:59:31 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:39.819 12:59:31 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 41bfba7c-6783-4684-a036-6e5746daee66 00:17:39.819 12:59:31 ftl.ftl_restore -- 
common/autotest_common.sh@1374 -- # local bdev_name=41bfba7c-6783-4684-a036-6e5746daee66 00:17:39.819 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:39.819 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:39.819 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:39.819 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41bfba7c-6783-4684-a036-6e5746daee66 00:17:40.077 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:40.077 { 00:17:40.077 "name": "41bfba7c-6783-4684-a036-6e5746daee66", 00:17:40.077 "aliases": [ 00:17:40.077 "lvs/nvme0n1p0" 00:17:40.077 ], 00:17:40.077 "product_name": "Logical Volume", 00:17:40.077 "block_size": 4096, 00:17:40.077 "num_blocks": 26476544, 00:17:40.077 "uuid": "41bfba7c-6783-4684-a036-6e5746daee66", 00:17:40.077 "assigned_rate_limits": { 00:17:40.077 "rw_ios_per_sec": 0, 00:17:40.078 "rw_mbytes_per_sec": 0, 00:17:40.078 "r_mbytes_per_sec": 0, 00:17:40.078 "w_mbytes_per_sec": 0 00:17:40.078 }, 00:17:40.078 "claimed": false, 00:17:40.078 "zoned": false, 00:17:40.078 "supported_io_types": { 00:17:40.078 "read": true, 00:17:40.078 "write": true, 00:17:40.078 "unmap": true, 00:17:40.078 "flush": false, 00:17:40.078 "reset": true, 00:17:40.078 "nvme_admin": false, 00:17:40.078 "nvme_io": false, 00:17:40.078 "nvme_io_md": false, 00:17:40.078 "write_zeroes": true, 00:17:40.078 "zcopy": false, 00:17:40.078 "get_zone_info": false, 00:17:40.078 "zone_management": false, 00:17:40.078 "zone_append": false, 00:17:40.078 "compare": false, 00:17:40.078 "compare_and_write": false, 00:17:40.078 "abort": false, 00:17:40.078 "seek_hole": true, 00:17:40.078 "seek_data": true, 00:17:40.078 "copy": false, 00:17:40.078 "nvme_iov_md": false 00:17:40.078 }, 00:17:40.078 "driver_specific": { 00:17:40.078 "lvol": { 00:17:40.078 "lvol_store_uuid": "f969fa8d-39ee-430b-8cd5-9c56e25d644b", 00:17:40.078 "base_bdev": "nvme0n1", 00:17:40.078 "thin_provision": true, 00:17:40.078 "num_allocated_clusters": 0, 00:17:40.078 "snapshot": false, 00:17:40.078 "clone": false, 00:17:40.078 "esnap_clone": false 00:17:40.078 } 00:17:40.078 } 00:17:40.078 } 00:17:40.078 ]' 00:17:40.078 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:40.336 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:40.336 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:40.336 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:40.336 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:40.336 12:59:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:40.336 12:59:31 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:40.336 12:59:31 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:40.595 12:59:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:40.595 12:59:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 41bfba7c-6783-4684-a036-6e5746daee66 00:17:40.595 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=41bfba7c-6783-4684-a036-6e5746daee66 00:17:40.595 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:40.595 12:59:32 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:17:40.595 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:40.595 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41bfba7c-6783-4684-a036-6e5746daee66 00:17:40.854 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:40.854 { 00:17:40.854 "name": "41bfba7c-6783-4684-a036-6e5746daee66", 00:17:40.854 "aliases": [ 00:17:40.854 "lvs/nvme0n1p0" 00:17:40.854 ], 00:17:40.854 "product_name": "Logical Volume", 00:17:40.854 "block_size": 4096, 00:17:40.854 "num_blocks": 26476544, 00:17:40.854 "uuid": "41bfba7c-6783-4684-a036-6e5746daee66", 00:17:40.854 "assigned_rate_limits": { 00:17:40.854 "rw_ios_per_sec": 0, 00:17:40.854 "rw_mbytes_per_sec": 0, 00:17:40.854 "r_mbytes_per_sec": 0, 00:17:40.854 "w_mbytes_per_sec": 0 00:17:40.854 }, 00:17:40.854 "claimed": false, 00:17:40.854 "zoned": false, 00:17:40.854 "supported_io_types": { 00:17:40.854 "read": true, 00:17:40.854 "write": true, 00:17:40.854 "unmap": true, 00:17:40.855 "flush": false, 00:17:40.855 "reset": true, 00:17:40.855 "nvme_admin": false, 00:17:40.855 "nvme_io": false, 00:17:40.855 "nvme_io_md": false, 00:17:40.855 "write_zeroes": true, 00:17:40.855 "zcopy": false, 00:17:40.855 "get_zone_info": false, 00:17:40.855 "zone_management": false, 00:17:40.855 "zone_append": false, 00:17:40.855 "compare": false, 00:17:40.855 "compare_and_write": false, 00:17:40.855 "abort": false, 00:17:40.855 "seek_hole": true, 00:17:40.855 "seek_data": true, 00:17:40.855 "copy": false, 00:17:40.855 "nvme_iov_md": false 00:17:40.855 }, 00:17:40.855 "driver_specific": { 00:17:40.855 "lvol": { 00:17:40.855 "lvol_store_uuid": "f969fa8d-39ee-430b-8cd5-9c56e25d644b", 00:17:40.855 "base_bdev": "nvme0n1", 00:17:40.855 "thin_provision": true, 00:17:40.855 "num_allocated_clusters": 0, 00:17:40.855 "snapshot": false, 00:17:40.855 "clone": false, 00:17:40.855 "esnap_clone": false 00:17:40.855 } 00:17:40.855 } 00:17:40.855 } 00:17:40.855 ]' 00:17:40.855 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:40.855 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:40.855 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:41.114 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:41.114 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:41.114 12:59:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:41.114 12:59:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:41.114 12:59:32 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 41bfba7c-6783-4684-a036-6e5746daee66 --l2p_dram_limit 10' 00:17:41.114 12:59:32 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:41.114 12:59:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:41.114 12:59:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:41.114 12:59:32 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:41.114 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:41.114 12:59:32 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 41bfba7c-6783-4684-a036-6e5746daee66 --l2p_dram_limit 10 -c nvc0n1p0 00:17:41.373 
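Before the FTL startup trace that follows, the script has built the whole bdev stack: the base NVMe controller, an lvstore with a thin-provisioned lvol on it, a second controller for the NV cache, a split partition used as the write-buffer cache, and finally the FTL bdev with a 10 MiB L2P DRAM limit. Condensed into the rpc.py calls the trace actually issued (the UUIDs are per-run values, shown here as placeholders):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device and a 103424 MiB thin-provisioned logical volume on top of it
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore uuid>
    # NV cache: attach the second controller and split off a 5171 MiB slice
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on the lvol, with nvc0n1p0 as the write buffer cache
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> --l2p_dram_limit 10 -c nvc0n1p0
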
[2024-08-11 12:59:32.780562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.373 [2024-08-11 12:59:32.780842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.373 [2024-08-11 12:59:32.780880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.373 [2024-08-11 12:59:32.780931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.373 [2024-08-11 12:59:32.781043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.373 [2024-08-11 12:59:32.781063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.374 [2024-08-11 12:59:32.781078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:41.374 [2024-08-11 12:59:32.781090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.781125] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.374 [2024-08-11 12:59:32.781496] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.374 [2024-08-11 12:59:32.781523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.781534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.374 [2024-08-11 12:59:32.781548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:41.374 [2024-08-11 12:59:32.781559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.781687] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9c755973-9dbf-4377-b4a2-419087894d03 00:17:41.374 [2024-08-11 12:59:32.782669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.782695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:41.374 [2024-08-11 12:59:32.782708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:41.374 [2024-08-11 12:59:32.782721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.787070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.787307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.374 [2024-08-11 12:59:32.787346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:17:41.374 [2024-08-11 12:59:32.787369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.787471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.787494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.374 [2024-08-11 12:59:32.787507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:41.374 [2024-08-11 12:59:32.787520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.787598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.787622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.374 [2024-08-11 12:59:32.787635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:41.374 [2024-08-11 12:59:32.787652] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.787683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.374 [2024-08-11 12:59:32.789241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.789275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.374 [2024-08-11 12:59:32.789309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:17:41.374 [2024-08-11 12:59:32.789320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.789364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.789378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.374 [2024-08-11 12:59:32.789391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:41.374 [2024-08-11 12:59:32.789413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.789455] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:41.374 [2024-08-11 12:59:32.789611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.374 [2024-08-11 12:59:32.789634] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.374 [2024-08-11 12:59:32.789649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:41.374 [2024-08-11 12:59:32.789671] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.374 [2024-08-11 12:59:32.789685] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.374 [2024-08-11 12:59:32.789699] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:41.374 [2024-08-11 12:59:32.789725] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.374 [2024-08-11 12:59:32.789737] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.374 [2024-08-11 12:59:32.789748] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.374 [2024-08-11 12:59:32.789762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.789773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.374 [2024-08-11 12:59:32.789786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:17:41.374 [2024-08-11 12:59:32.789805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.789909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.374 [2024-08-11 12:59:32.789929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.374 [2024-08-11 12:59:32.789946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:41.374 [2024-08-11 12:59:32.789957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.374 [2024-08-11 12:59:32.790070] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.374 [2024-08-11 12:59:32.790085] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:17:41.374 [2024-08-11 12:59:32.790100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.374 [2024-08-11 12:59:32.790156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.374 [2024-08-11 12:59:32.790192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.374 [2024-08-11 12:59:32.790214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.374 [2024-08-11 12:59:32.790225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:41.374 [2024-08-11 12:59:32.790236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.374 [2024-08-11 12:59:32.790247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.374 [2024-08-11 12:59:32.790262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:41.374 [2024-08-11 12:59:32.790272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.374 [2024-08-11 12:59:32.790294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.374 [2024-08-11 12:59:32.790331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.374 [2024-08-11 12:59:32.790363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.374 [2024-08-11 12:59:32.790397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.374 [2024-08-11 12:59:32.790429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.374 [2024-08-11 12:59:32.790467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790477] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.374 [2024-08-11 12:59:32.790489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.374 [2024-08-11 12:59:32.790499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:41.374 [2024-08-11 12:59:32.790511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.374 [2024-08-11 12:59:32.790521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.374 [2024-08-11 12:59:32.790534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:41.374 [2024-08-11 12:59:32.790544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.374 [2024-08-11 12:59:32.790566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:41.374 [2024-08-11 12:59:32.790578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790587] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.374 [2024-08-11 12:59:32.790600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.374 [2024-08-11 12:59:32.790613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.374 [2024-08-11 12:59:32.790637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.374 [2024-08-11 12:59:32.790649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.374 [2024-08-11 12:59:32.790671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.374 [2024-08-11 12:59:32.790682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.374 [2024-08-11 12:59:32.790695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.374 [2024-08-11 12:59:32.790706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.374 [2024-08-11 12:59:32.790720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.375 [2024-08-11 12:59:32.790736] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.375 [2024-08-11 12:59:32.790759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.375 [2024-08-11 12:59:32.790772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:41.375 [2024-08-11 12:59:32.790785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:41.375 [2024-08-11 12:59:32.790795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:41.375 [2024-08-11 12:59:32.790808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:41.375 [2024-08-11 12:59:32.790819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:41.375 [2024-08-11 12:59:32.790832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:17:41.375 [2024-08-11 12:59:32.790843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:41.375 [2024-08-11 12:59:32.790857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:41.375 [2024-08-11 12:59:32.790868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:41.375 [2024-08-11 12:59:32.791177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:41.375 [2024-08-11 12:59:32.791259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:41.375 [2024-08-11 12:59:32.791412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:41.375 [2024-08-11 12:59:32.791473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:41.375 [2024-08-11 12:59:32.791603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:41.375 [2024-08-11 12:59:32.791727] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.375 [2024-08-11 12:59:32.791920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.375 [2024-08-11 12:59:32.792083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.375 [2024-08-11 12:59:32.792220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.375 [2024-08-11 12:59:32.792304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.375 [2024-08-11 12:59:32.792430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.375 [2024-08-11 12:59:32.792555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.375 [2024-08-11 12:59:32.792701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.375 [2024-08-11 12:59:32.792751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:17:41.375 [2024-08-11 12:59:32.792837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.375 [2024-08-11 12:59:32.793026] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:41.375 [2024-08-11 12:59:32.793166] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:43.275 [2024-08-11 12:59:34.694767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.694845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:43.275 [2024-08-11 12:59:34.694888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1901.752 ms 00:17:43.275 [2024-08-11 12:59:34.694909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.702465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.702545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.275 [2024-08-11 12:59:34.702565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.461 ms 00:17:43.275 [2024-08-11 12:59:34.702580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.702709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.702731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:43.275 [2024-08-11 12:59:34.702745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:43.275 [2024-08-11 12:59:34.702759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.710937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.711003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.275 [2024-08-11 12:59:34.711032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.087 ms 00:17:43.275 [2024-08-11 12:59:34.711048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.711102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.711119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.275 [2024-08-11 12:59:34.711133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.275 [2024-08-11 12:59:34.711147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.711505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.711528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.275 [2024-08-11 12:59:34.711545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:43.275 [2024-08-11 12:59:34.711559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.711707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.711736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.275 [2024-08-11 12:59:34.711749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:43.275 [2024-08-11 12:59:34.711763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.717295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.717351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.275 [2024-08-11 
12:59:34.717373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.505 ms 00:17:43.275 [2024-08-11 12:59:34.717387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.726545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:43.275 [2024-08-11 12:59:34.729262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.729301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:43.275 [2024-08-11 12:59:34.729324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.732 ms 00:17:43.275 [2024-08-11 12:59:34.729350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.777924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.778177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:43.275 [2024-08-11 12:59:34.778217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.518 ms 00:17:43.275 [2024-08-11 12:59:34.778232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.778470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.778490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:43.275 [2024-08-11 12:59:34.778507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:17:43.275 [2024-08-11 12:59:34.778519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.782058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.782101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:43.275 [2024-08-11 12:59:34.782122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:17:43.275 [2024-08-11 12:59:34.782135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.785082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.785245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:43.275 [2024-08-11 12:59:34.785279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:17:43.275 [2024-08-11 12:59:34.785292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.785658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.785689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:43.275 [2024-08-11 12:59:34.785706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:43.275 [2024-08-11 12:59:34.785718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.814096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.814168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:43.275 [2024-08-11 12:59:34.814193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.313 ms 00:17:43.275 [2024-08-11 12:59:34.814217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.818340] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.818386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:43.275 [2024-08-11 12:59:34.818408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:17:43.275 [2024-08-11 12:59:34.818421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.821750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.821793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:43.275 [2024-08-11 12:59:34.821813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:17:43.275 [2024-08-11 12:59:34.821825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.825649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.825695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:43.275 [2024-08-11 12:59:34.825716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:17:43.275 [2024-08-11 12:59:34.825729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.825789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.825816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:43.275 [2024-08-11 12:59:34.825832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:43.275 [2024-08-11 12:59:34.825844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.825950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.275 [2024-08-11 12:59:34.825970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:43.275 [2024-08-11 12:59:34.825985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:43.275 [2024-08-11 12:59:34.826005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.275 [2024-08-11 12:59:34.827085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2046.018 ms, result 0 00:17:43.275 { 00:17:43.275 "name": "ftl0", 00:17:43.275 "uuid": "9c755973-9dbf-4377-b4a2-419087894d03" 00:17:43.275 } 00:17:43.275 12:59:34 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:43.275 12:59:34 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:43.842 12:59:35 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:43.842 12:59:35 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:43.842 [2024-08-11 12:59:35.427535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.842 [2024-08-11 12:59:35.427829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:43.842 [2024-08-11 12:59:35.427890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.842 [2024-08-11 12:59:35.427912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.842 [2024-08-11 12:59:35.427962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.842 
[2024-08-11 12:59:35.428421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.842 [2024-08-11 12:59:35.428439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.842 [2024-08-11 12:59:35.428457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:17:43.842 [2024-08-11 12:59:35.428469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.842 [2024-08-11 12:59:35.428765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.842 [2024-08-11 12:59:35.428789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.842 [2024-08-11 12:59:35.428808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:43.842 [2024-08-11 12:59:35.428820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.842 [2024-08-11 12:59:35.432316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.842 [2024-08-11 12:59:35.432357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.842 [2024-08-11 12:59:35.432375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:17:43.842 [2024-08-11 12:59:35.432388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.439132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.439169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:44.103 [2024-08-11 12:59:35.439187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.714 ms 00:17:44.103 [2024-08-11 12:59:35.439199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.440635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.440678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.103 [2024-08-11 12:59:35.440701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:17:44.103 [2024-08-11 12:59:35.440713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.444360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.444408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.103 [2024-08-11 12:59:35.444429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:17:44.103 [2024-08-11 12:59:35.444441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.444592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.444611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.103 [2024-08-11 12:59:35.444628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:44.103 [2024-08-11 12:59:35.444639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.446412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.446575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:44.103 [2024-08-11 12:59:35.446608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:17:44.103 [2024-08-11 12:59:35.446621] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.448004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.448042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:44.103 [2024-08-11 12:59:35.448063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:17:44.103 [2024-08-11 12:59:35.448074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.449157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.449195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.103 [2024-08-11 12:59:35.449213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:17:44.103 [2024-08-11 12:59:35.449224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.450490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.103 [2024-08-11 12:59:35.450539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.103 [2024-08-11 12:59:35.450558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:17:44.103 [2024-08-11 12:59:35.450570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.103 [2024-08-11 12:59:35.450616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.103 [2024-08-11 12:59:35.450648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450842] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.450991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.103 [2024-08-11 12:59:35.451192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 
[2024-08-11 12:59:35.451206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:17:44.104 [2024-08-11 12:59:35.451538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.451994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.452008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.452020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.452035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.104 [2024-08-11 12:59:35.452057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.104 [2024-08-11 12:59:35.452074] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c755973-9dbf-4377-b4a2-419087894d03 00:17:44.104 [2024-08-11 12:59:35.452086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.104 [2024-08-11 12:59:35.452100] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.104 [2024-08-11 12:59:35.452114] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.104 [2024-08-11 12:59:35.452128] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.104 [2024-08-11 12:59:35.452139] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.104 [2024-08-11 12:59:35.452155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.104 [2024-08-11 12:59:35.452167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.104 [2024-08-11 12:59:35.452180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.104 [2024-08-11 12:59:35.452190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.104 [2024-08-11 12:59:35.452204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.104 [2024-08-11 12:59:35.452215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.104 [2024-08-11 12:59:35.452230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.592 ms 00:17:44.104 [2024-08-11 12:59:35.452241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.104 [2024-08-11 12:59:35.453640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.104 [2024-08-11 12:59:35.453671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.104 
[2024-08-11 12:59:35.453690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:17:44.104 [2024-08-11 12:59:35.453702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.104 [2024-08-11 12:59:35.453808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.104 [2024-08-11 12:59:35.453826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.104 [2024-08-11 12:59:35.453842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:44.104 [2024-08-11 12:59:35.453853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.104 [2024-08-11 12:59:35.459257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.104 [2024-08-11 12:59:35.459318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.104 [2024-08-11 12:59:35.459341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.104 [2024-08-11 12:59:35.459353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.104 [2024-08-11 12:59:35.459426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.104 [2024-08-11 12:59:35.459441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.104 [2024-08-11 12:59:35.459455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.104 [2024-08-11 12:59:35.459467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.104 [2024-08-11 12:59:35.459572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.104 [2024-08-11 12:59:35.459595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.104 [2024-08-11 12:59:35.459614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.104 [2024-08-11 12:59:35.459626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.104 [2024-08-11 12:59:35.459655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.104 [2024-08-11 12:59:35.459670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.104 [2024-08-11 12:59:35.459684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.104 [2024-08-11 12:59:35.459695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.468233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.468302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.105 [2024-08-11 12:59:35.468323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.468336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.474752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.474815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.105 [2024-08-11 12:59:35.474838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.474850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.474991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.475017] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.105 [2024-08-11 12:59:35.475040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.475051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.475107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.475124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.105 [2024-08-11 12:59:35.475139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.475161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.475262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.475281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.105 [2024-08-11 12:59:35.475300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.475311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.475371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.475396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.105 [2024-08-11 12:59:35.475412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.475423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.475474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.475496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.105 [2024-08-11 12:59:35.475513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.475528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.475594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.105 [2024-08-11 12:59:35.475613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.105 [2024-08-11 12:59:35.475628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.105 [2024-08-11 12:59:35.475640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.105 [2024-08-11 12:59:35.475811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.220 ms, result 0 00:17:44.105 true 00:17:44.105 12:59:35 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85301 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 85301 ']' 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 85301 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85301 00:17:44.105 killing process with pid 85301 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:44.105 
12:59:35 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85301' 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 85301 00:17:44.105 12:59:35 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 85301 00:17:47.390 12:59:38 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:52.653 262144+0 records in 00:17:52.653 262144+0 records out 00:17:52.653 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.81435 s, 223 MB/s 00:17:52.653 12:59:43 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:54.027 12:59:45 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.027 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:54.027 [2024-08-11 12:59:45.447304] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:17:54.027 [2024-08-11 12:59:45.447447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85515 ] 00:17:54.027 [2024-08-11 12:59:45.586479] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.286 [2024-08-11 12:59:45.627604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.286 [2024-08-11 12:59:45.716879] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.286 [2024-08-11 12:59:45.716976] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.286 [2024-08-11 12:59:45.873691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.286 [2024-08-11 12:59:45.873782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.286 [2024-08-11 12:59:45.873817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:54.286 [2024-08-11 12:59:45.873830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.286 [2024-08-11 12:59:45.873954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.286 [2024-08-11 12:59:45.873976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.286 [2024-08-11 12:59:45.873999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:54.286 [2024-08-11 12:59:45.874011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.286 [2024-08-11 12:59:45.874045] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.286 [2024-08-11 12:59:45.874341] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.286 [2024-08-11 12:59:45.874391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.286 [2024-08-11 12:59:45.874404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.286 [2024-08-11 12:59:45.874416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:17:54.286 [2024-08-11 12:59:45.874431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.286 
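The restore test preparation traced above comes down to three shell steps: fill a 1 GiB file with random data (bs=4K count=256K, i.e. 262144 x 4 KiB = 1073741824 bytes, written here at ~223 MB/s per the dd summary), take its md5sum so the data can be compared later in the restore flow, and replay the file into the ftl0 bdev with spdk_dd against the generated ftl.json config. A minimal standalone sketch of that sequence, using the paths and flags exactly as they appear in the log above and omitting the autotest_common.sh plumbing:

    #!/usr/bin/env bash
    # Sketch of the ftl/restore.sh data-preparation step (paths/flags copied from the log above).
    set -euo pipefail

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    TESTFILE=$SPDK_DIR/test/ftl/testfile

    # 1 GiB of random data: 256K blocks of 4 KiB each (262144 * 4096 B = 1073741824 B).
    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

    # Checksum recorded now so the contents can be compared again later in the test.
    md5sum "$TESTFILE"

    # Write the file into the ftl0 bdev; spdk_dd constructs the bdev stack from ftl.json.
    "$SPDK_DIR/build/bin/spdk_dd" \
        --if="$TESTFILE" \
        --ob=ftl0 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json"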
[2024-08-11 12:59:45.875632] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.286 [2024-08-11 12:59:45.878011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.286 [2024-08-11 12:59:45.878055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.286 [2024-08-11 12:59:45.878072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.385 ms 00:17:54.286 [2024-08-11 12:59:45.878084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.286 [2024-08-11 12:59:45.878156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.286 [2024-08-11 12:59:45.878176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.286 [2024-08-11 12:59:45.878190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:54.286 [2024-08-11 12:59:45.878220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.546 [2024-08-11 12:59:45.883006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.546 [2024-08-11 12:59:45.883055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.546 [2024-08-11 12:59:45.883071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:17:54.546 [2024-08-11 12:59:45.883082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.546 [2024-08-11 12:59:45.883207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.546 [2024-08-11 12:59:45.883243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.547 [2024-08-11 12:59:45.883255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:54.547 [2024-08-11 12:59:45.883269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.547 [2024-08-11 12:59:45.883359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.547 [2024-08-11 12:59:45.883377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.547 [2024-08-11 12:59:45.883395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:54.547 [2024-08-11 12:59:45.883407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.547 [2024-08-11 12:59:45.883448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.547 [2024-08-11 12:59:45.884980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.547 [2024-08-11 12:59:45.885028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.547 [2024-08-11 12:59:45.885043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:17:54.547 [2024-08-11 12:59:45.885060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.547 [2024-08-11 12:59:45.885103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.547 [2024-08-11 12:59:45.885118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.547 [2024-08-11 12:59:45.885131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:54.547 [2024-08-11 12:59:45.885142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.547 [2024-08-11 12:59:45.885170] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.547 
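The layout dump that follows prints the geometry this FTL instance derived for the run (base and NV cache capacities, L2P sizing, region offsets). One consistency check that can be read straight off those numbers: the L2P table holds one address per logical block, so its footprint is just entries times address size, and 20971520 entries x 4 bytes = 80 MiB, which matches the "Region l2p ... blocks: 80.00 MiB" line in the dump. A throwaway arithmetic sketch, using only values printed below:

    # Cross-check the l2p region size reported in the layout dump.
    l2p_entries=20971520    # "L2P entries" from the dump
    l2p_entry_bytes=4       # "L2P address size" from the dump
    echo "$(( l2p_entries * l2p_entry_bytes / 1024 / 1024 )) MiB"   # prints "80 MiB"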
[2024-08-11 12:59:45.885200] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:54.547 [2024-08-11 12:59:45.885253] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.547 [2024-08-11 12:59:45.885297] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:54.547 [2024-08-11 12:59:45.885426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.547 [2024-08-11 12:59:45.885458] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.547 [2024-08-11 12:59:45.885471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:54.547 [2024-08-11 12:59:45.885485] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.547 [2024-08-11 12:59:45.885497] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.547 [2024-08-11 12:59:45.885508] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.547 [2024-08-11 12:59:45.885518] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.547 [2024-08-11 12:59:45.885528] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.547 [2024-08-11 12:59:45.885542] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.547 [2024-08-11 12:59:45.885553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.547 [2024-08-11 12:59:45.885564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.547 [2024-08-11 12:59:45.885574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:17:54.547 [2024-08-11 12:59:45.885584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.547 [2024-08-11 12:59:45.885687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.547 [2024-08-11 12:59:45.885712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.547 [2024-08-11 12:59:45.885729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:54.547 [2024-08-11 12:59:45.885747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.547 [2024-08-11 12:59:45.885875] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.547 [2024-08-11 12:59:45.885902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.547 [2024-08-11 12:59:45.885915] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.547 [2024-08-11 12:59:45.885963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.547 [2024-08-11 12:59:45.885979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.547 [2024-08-11 12:59:45.885989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.547 [2024-08-11 12:59:45.885999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.547 [2024-08-11 12:59:45.886020] ftl_layout.c: 119:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.547 [2024-08-11 12:59:45.886040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.547 [2024-08-11 12:59:45.886050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.547 [2024-08-11 12:59:45.886060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.547 [2024-08-11 12:59:45.886070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.547 [2024-08-11 12:59:45.886085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:54.547 [2024-08-11 12:59:45.886097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.547 [2024-08-11 12:59:45.886117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886127] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.547 [2024-08-11 12:59:45.886157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.547 [2024-08-11 12:59:45.886187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.547 [2024-08-11 12:59:45.886216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.547 [2024-08-11 12:59:45.886247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.547 [2024-08-11 12:59:45.886296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.547 [2024-08-11 12:59:45.886315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.547 [2024-08-11 12:59:45.886325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:54.547 [2024-08-11 12:59:45.886350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.547 [2024-08-11 12:59:45.886360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.547 [2024-08-11 12:59:45.886370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:54.547 [2024-08-11 12:59:45.886379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886389] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.547 [2024-08-11 12:59:45.886399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:54.547 [2024-08-11 12:59:45.886408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886418] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.547 [2024-08-11 12:59:45.886429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.547 [2024-08-11 12:59:45.886439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886453] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.547 [2024-08-11 12:59:45.886466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.547 [2024-08-11 12:59:45.886477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.547 [2024-08-11 12:59:45.886486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.547 [2024-08-11 12:59:45.886497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.547 [2024-08-11 12:59:45.886506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.547 [2024-08-11 12:59:45.886517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.547 [2024-08-11 12:59:45.886528] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.547 [2024-08-11 12:59:45.886542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.547 [2024-08-11 12:59:45.886564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.547 [2024-08-11 12:59:45.886575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:54.547 [2024-08-11 12:59:45.886586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:54.547 [2024-08-11 12:59:45.886597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:54.547 [2024-08-11 12:59:45.886608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:54.547 [2024-08-11 12:59:45.886619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:54.547 [2024-08-11 12:59:45.886630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:54.547 [2024-08-11 12:59:45.886644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:54.547 [2024-08-11 12:59:45.886656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:54.547 [2024-08-11 12:59:45.886676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:54.547 [2024-08-11 12:59:45.886687] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:54.548 [2024-08-11 12:59:45.886708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:54.548 [2024-08-11 12:59:45.886719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:54.548 [2024-08-11 12:59:45.886731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:54.548 [2024-08-11 12:59:45.886741] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.548 [2024-08-11 12:59:45.886757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.548 [2024-08-11 12:59:45.886769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.548 [2024-08-11 12:59:45.886781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.548 [2024-08-11 12:59:45.886792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.548 [2024-08-11 12:59:45.886802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.548 [2024-08-11 12:59:45.886814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.886825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.548 [2024-08-11 12:59:45.886837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:17:54.548 [2024-08-11 12:59:45.886851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.909056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.909136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.548 [2024-08-11 12:59:45.909162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.120 ms 00:17:54.548 [2024-08-11 12:59:45.909177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.909336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.909362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.548 [2024-08-11 12:59:45.909379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:54.548 [2024-08-11 12:59:45.909394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.918521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.918578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.548 [2024-08-11 12:59:45.918626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.992 ms 00:17:54.548 [2024-08-11 12:59:45.918636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.918697] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.918713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.548 [2024-08-11 12:59:45.918725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.548 [2024-08-11 12:59:45.918735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.919139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.919160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.548 [2024-08-11 12:59:45.919173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:17:54.548 [2024-08-11 12:59:45.919189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.919424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.919453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.548 [2024-08-11 12:59:45.919466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:54.548 [2024-08-11 12:59:45.919476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.924663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.924705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.548 [2024-08-11 12:59:45.924735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.137 ms 00:17:54.548 [2024-08-11 12:59:45.924746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.927293] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:54.548 [2024-08-11 12:59:45.927338] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.548 [2024-08-11 12:59:45.927372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.927384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.548 [2024-08-11 12:59:45.927396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:17:54.548 [2024-08-11 12:59:45.927424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.943987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.944042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.548 [2024-08-11 12:59:45.944060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.504 ms 00:17:54.548 [2024-08-11 12:59:45.944071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.946121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.946164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.548 [2024-08-11 12:59:45.946181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:17:54.548 [2024-08-11 12:59:45.946192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.947801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 
12:59:45.948018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.548 [2024-08-11 12:59:45.948047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:17:54.548 [2024-08-11 12:59:45.948059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.948519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.948552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.548 [2024-08-11 12:59:45.948577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:17:54.548 [2024-08-11 12:59:45.948607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.966337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.966642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.548 [2024-08-11 12:59:45.966674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.705 ms 00:17:54.548 [2024-08-11 12:59:45.966703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.975677] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:54.548 [2024-08-11 12:59:45.978639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.978674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.548 [2024-08-11 12:59:45.978705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.856 ms 00:17:54.548 [2024-08-11 12:59:45.978725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.978828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.978889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.548 [2024-08-11 12:59:45.978903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.548 [2024-08-11 12:59:45.978917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.979050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.979069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.548 [2024-08-11 12:59:45.979082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:54.548 [2024-08-11 12:59:45.979093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.979135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.979164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.548 [2024-08-11 12:59:45.979176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.548 [2024-08-11 12:59:45.979187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.979248] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.548 [2024-08-11 12:59:45.979265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.979276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.548 [2024-08-11 
12:59:45.979287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:54.548 [2024-08-11 12:59:45.979297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.983178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.983266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.548 [2024-08-11 12:59:45.983314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.844 ms 00:17:54.548 [2024-08-11 12:59:45.983325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.983401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.548 [2024-08-11 12:59:45.983418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.548 [2024-08-11 12:59:45.983430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:54.548 [2024-08-11 12:59:45.983439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.548 [2024-08-11 12:59:45.984689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.473 ms, result 0 00:18:34.918  Copying: 25/1024 [MB] (25 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (26 MBps) Copying: 104/1024 [MB] (26 MBps) Copying: 130/1024 [MB] (25 MBps) Copying: 156/1024 [MB] (25 MBps) Copying: 182/1024 [MB] (25 MBps) Copying: 208/1024 [MB] (26 MBps) Copying: 234/1024 [MB] (25 MBps) Copying: 261/1024 [MB] (26 MBps) Copying: 285/1024 [MB] (24 MBps) Copying: 309/1024 [MB] (23 MBps) Copying: 333/1024 [MB] (23 MBps) Copying: 358/1024 [MB] (25 MBps) Copying: 385/1024 [MB] (26 MBps) Copying: 412/1024 [MB] (27 MBps) Copying: 439/1024 [MB] (27 MBps) Copying: 465/1024 [MB] (26 MBps) Copying: 492/1024 [MB] (26 MBps) Copying: 518/1024 [MB] (25 MBps) Copying: 542/1024 [MB] (24 MBps) Copying: 566/1024 [MB] (24 MBps) Copying: 591/1024 [MB] (24 MBps) Copying: 616/1024 [MB] (24 MBps) Copying: 641/1024 [MB] (25 MBps) Copying: 666/1024 [MB] (24 MBps) Copying: 691/1024 [MB] (24 MBps) Copying: 716/1024 [MB] (25 MBps) Copying: 741/1024 [MB] (25 MBps) Copying: 766/1024 [MB] (24 MBps) Copying: 791/1024 [MB] (24 MBps) Copying: 816/1024 [MB] (24 MBps) Copying: 840/1024 [MB] (24 MBps) Copying: 865/1024 [MB] (24 MBps) Copying: 890/1024 [MB] (24 MBps) Copying: 914/1024 [MB] (24 MBps) Copying: 938/1024 [MB] (24 MBps) Copying: 962/1024 [MB] (24 MBps) Copying: 986/1024 [MB] (23 MBps) Copying: 1011/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-08-11 13:00:26.488920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.918 [2024-08-11 13:00:26.488982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.918 [2024-08-11 13:00:26.489003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:34.918 [2024-08-11 13:00:26.489016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.918 [2024-08-11 13:00:26.489047] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:34.918 [2024-08-11 13:00:26.489500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.918 [2024-08-11 13:00:26.489518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.918 [2024-08-11 13:00:26.489531] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:18:34.918 [2024-08-11 13:00:26.489549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.918 [2024-08-11 13:00:26.492093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.918 [2024-08-11 13:00:26.492139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.918 [2024-08-11 13:00:26.492156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:18:34.918 [2024-08-11 13:00:26.492167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.918 [2024-08-11 13:00:26.507987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.918 [2024-08-11 13:00:26.508082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.918 [2024-08-11 13:00:26.508113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.792 ms 00:18:34.918 [2024-08-11 13:00:26.508125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.177 [2024-08-11 13:00:26.514831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.177 [2024-08-11 13:00:26.514905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.177 [2024-08-11 13:00:26.514921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.649 ms 00:18:35.177 [2024-08-11 13:00:26.514933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.177 [2024-08-11 13:00:26.516619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.177 [2024-08-11 13:00:26.516665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.177 [2024-08-11 13:00:26.516680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:18:35.178 [2024-08-11 13:00:26.516691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.178 [2024-08-11 13:00:26.521309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.178 [2024-08-11 13:00:26.521372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.178 [2024-08-11 13:00:26.521390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:18:35.178 [2024-08-11 13:00:26.521401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.178 [2024-08-11 13:00:26.521528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.178 [2024-08-11 13:00:26.521546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.178 [2024-08-11 13:00:26.521559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:35.178 [2024-08-11 13:00:26.521569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.178 [2024-08-11 13:00:26.524270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.178 [2024-08-11 13:00:26.524315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:35.178 [2024-08-11 13:00:26.524331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:18:35.178 [2024-08-11 13:00:26.524342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.178 [2024-08-11 13:00:26.526409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.178 [2024-08-11 13:00:26.526451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:35.178 
[2024-08-11 13:00:26.526465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:18:35.178 [2024-08-11 13:00:26.526475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.178 [2024-08-11 13:00:26.527729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.178 [2024-08-11 13:00:26.527769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.178 [2024-08-11 13:00:26.527784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:18:35.178 [2024-08-11 13:00:26.527794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.178 [2024-08-11 13:00:26.529029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.178 [2024-08-11 13:00:26.529202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.178 [2024-08-11 13:00:26.529229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:18:35.178 [2024-08-11 13:00:26.529240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.178 [2024-08-11 13:00:26.529304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.178 [2024-08-11 13:00:26.529329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529526] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 
[2024-08-11 13:00:26.529817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.529994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:18:35.178 [2024-08-11 13:00:26.530146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.178 [2024-08-11 13:00:26.530168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:35.179 [2024-08-11 13:00:26.530538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:35.179 [2024-08-11 13:00:26.530561] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c755973-9dbf-4377-b4a2-419087894d03 00:18:35.179 [2024-08-11 13:00:26.530572] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:35.179 [2024-08-11 13:00:26.530583] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:35.179 [2024-08-11 13:00:26.530593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:35.179 [2024-08-11 13:00:26.530609] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:35.179 [2024-08-11 13:00:26.530627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:35.179 [2024-08-11 13:00:26.530638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:35.179 [2024-08-11 13:00:26.530648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:35.179 [2024-08-11 13:00:26.530658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:35.179 [2024-08-11 13:00:26.530668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:35.179 [2024-08-11 13:00:26.530679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.179 [2024-08-11 13:00:26.530690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:35.179 [2024-08-11 13:00:26.530702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:18:35.179 [2024-08-11 13:00:26.530713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.532115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.179 [2024-08-11 13:00:26.532152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:35.179 [2024-08-11 13:00:26.532166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:18:35.179 [2024-08-11 13:00:26.532176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.532258] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:35.179 [2024-08-11 13:00:26.532272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:35.179 [2024-08-11 13:00:26.532285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:35.179 [2024-08-11 13:00:26.532296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.537087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.537316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.179 [2024-08-11 13:00:26.537432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.537483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.537592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.537763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.179 [2024-08-11 13:00:26.537829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.537908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.538127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.538284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.179 [2024-08-11 13:00:26.538310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.538323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.538350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.538365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.179 [2024-08-11 13:00:26.538377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.538387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.547649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.547734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.179 [2024-08-11 13:00:26.547753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.547764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.554403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.554477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.179 [2024-08-11 13:00:26.554496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.554507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.554578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.554596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.179 [2024-08-11 13:00:26.554618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.554629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:35.179 [2024-08-11 13:00:26.554659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.554674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.179 [2024-08-11 13:00:26.554685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.554695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.554791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.554811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.179 [2024-08-11 13:00:26.554823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.554840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.554903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.554924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.179 [2024-08-11 13:00:26.554936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.554947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.555007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.555023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.179 [2024-08-11 13:00:26.555035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.555053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.555105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.179 [2024-08-11 13:00:26.555121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.179 [2024-08-11 13:00:26.555133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.179 [2024-08-11 13:00:26.555144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.179 [2024-08-11 13:00:26.555305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.345 ms, result 0 00:18:35.746 00:18:35.746 00:18:35.746 13:00:27 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:35.746 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:18:35.746 [2024-08-11 13:00:27.304915] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:18:35.747 [2024-08-11 13:00:27.305090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85935 ] 00:18:36.005 [2024-08-11 13:00:27.455578] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.005 [2024-08-11 13:00:27.498282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.005 [2024-08-11 13:00:27.589135] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:36.005 [2024-08-11 13:00:27.589234] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:36.264 [2024-08-11 13:00:27.757574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.264 [2024-08-11 13:00:27.757650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:36.264 [2024-08-11 13:00:27.757672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:36.264 [2024-08-11 13:00:27.757695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.757789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.264 [2024-08-11 13:00:27.757820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:36.264 [2024-08-11 13:00:27.757834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:36.264 [2024-08-11 13:00:27.757845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.757902] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:36.264 [2024-08-11 13:00:27.758272] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:36.264 [2024-08-11 13:00:27.758299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.264 [2024-08-11 13:00:27.758311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:36.264 [2024-08-11 13:00:27.758334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:18:36.264 [2024-08-11 13:00:27.758349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.759436] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:36.264 [2024-08-11 13:00:27.761862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.264 [2024-08-11 13:00:27.761921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:36.264 [2024-08-11 13:00:27.761951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:18:36.264 [2024-08-11 13:00:27.761963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.762036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.264 [2024-08-11 13:00:27.762056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:36.264 [2024-08-11 13:00:27.762079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:36.264 [2024-08-11 13:00:27.762090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.766686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:36.264 [2024-08-11 13:00:27.766758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.264 [2024-08-11 13:00:27.766775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:18:36.264 [2024-08-11 13:00:27.766787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.766952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.264 [2024-08-11 13:00:27.766978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.264 [2024-08-11 13:00:27.766991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:36.264 [2024-08-11 13:00:27.767007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.767117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.264 [2024-08-11 13:00:27.767136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:36.264 [2024-08-11 13:00:27.767172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:36.264 [2024-08-11 13:00:27.767184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.264 [2024-08-11 13:00:27.767222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:36.265 [2024-08-11 13:00:27.768621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.265 [2024-08-11 13:00:27.768661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.265 [2024-08-11 13:00:27.768678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:18:36.265 [2024-08-11 13:00:27.768695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.265 [2024-08-11 13:00:27.768746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.265 [2024-08-11 13:00:27.768763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:36.265 [2024-08-11 13:00:27.768776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:36.265 [2024-08-11 13:00:27.768787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.265 [2024-08-11 13:00:27.768825] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:36.265 [2024-08-11 13:00:27.768856] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:36.265 [2024-08-11 13:00:27.768956] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:36.265 [2024-08-11 13:00:27.768999] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:36.265 [2024-08-11 13:00:27.769112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:36.265 [2024-08-11 13:00:27.769129] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:36.265 [2024-08-11 13:00:27.769153] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:36.265 [2024-08-11 13:00:27.769168] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769182] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769194] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:36.265 [2024-08-11 13:00:27.769205] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:36.265 [2024-08-11 13:00:27.769216] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:36.265 [2024-08-11 13:00:27.769231] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:36.265 [2024-08-11 13:00:27.769243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.265 [2024-08-11 13:00:27.769254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:36.265 [2024-08-11 13:00:27.769266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:18:36.265 [2024-08-11 13:00:27.769286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.265 [2024-08-11 13:00:27.769378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.265 [2024-08-11 13:00:27.769392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:36.265 [2024-08-11 13:00:27.769412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:36.265 [2024-08-11 13:00:27.769425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.265 [2024-08-11 13:00:27.769541] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:36.265 [2024-08-11 13:00:27.769566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:36.265 [2024-08-11 13:00:27.769579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:36.265 [2024-08-11 13:00:27.769625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769635] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:36.265 [2024-08-11 13:00:27.769657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.265 [2024-08-11 13:00:27.769678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:36.265 [2024-08-11 13:00:27.769688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:36.265 [2024-08-11 13:00:27.769699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.265 [2024-08-11 13:00:27.769711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:36.265 [2024-08-11 13:00:27.769725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:36.265 [2024-08-11 13:00:27.769736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:36.265 [2024-08-11 13:00:27.769759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769769] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:36.265 [2024-08-11 13:00:27.769791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769801] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:36.265 [2024-08-11 13:00:27.769822] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:36.265 [2024-08-11 13:00:27.769853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:36.265 [2024-08-11 13:00:27.769909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.265 [2024-08-11 13:00:27.769938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:36.265 [2024-08-11 13:00:27.769949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:36.265 [2024-08-11 13:00:27.769959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.265 [2024-08-11 13:00:27.769969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:36.265 [2024-08-11 13:00:27.769980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:36.265 [2024-08-11 13:00:27.769990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.265 [2024-08-11 13:00:27.770001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:36.265 [2024-08-11 13:00:27.770012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:36.265 [2024-08-11 13:00:27.770023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.265 [2024-08-11 13:00:27.770033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:36.265 [2024-08-11 13:00:27.770044] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:36.265 [2024-08-11 13:00:27.770054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.265 [2024-08-11 13:00:27.770064] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:36.265 [2024-08-11 13:00:27.770075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:36.265 [2024-08-11 13:00:27.770086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.265 [2024-08-11 13:00:27.770100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.265 [2024-08-11 13:00:27.770112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:36.265 [2024-08-11 13:00:27.770123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:36.265 [2024-08-11 13:00:27.770136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:36.265 
[2024-08-11 13:00:27.770147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:36.265 [2024-08-11 13:00:27.770157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:36.265 [2024-08-11 13:00:27.770168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:36.265 [2024-08-11 13:00:27.770179] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:36.265 [2024-08-11 13:00:27.770193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.265 [2024-08-11 13:00:27.770206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:36.265 [2024-08-11 13:00:27.770217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:36.265 [2024-08-11 13:00:27.770228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:36.265 [2024-08-11 13:00:27.770240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:36.265 [2024-08-11 13:00:27.770251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:36.265 [2024-08-11 13:00:27.770262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:36.265 [2024-08-11 13:00:27.770273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:36.265 [2024-08-11 13:00:27.770287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:36.265 [2024-08-11 13:00:27.770299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:36.265 [2024-08-11 13:00:27.770311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:36.265 [2024-08-11 13:00:27.770322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:36.265 [2024-08-11 13:00:27.770345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:36.265 [2024-08-11 13:00:27.770358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:36.265 [2024-08-11 13:00:27.770370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:36.265 [2024-08-11 13:00:27.770381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:36.265 [2024-08-11 13:00:27.770398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.265 [2024-08-11 13:00:27.770419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:36.266 [2024-08-11 13:00:27.770431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:36.266 [2024-08-11 13:00:27.770442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:36.266 [2024-08-11 13:00:27.770454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:36.266 [2024-08-11 13:00:27.770466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.770478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:36.266 [2024-08-11 13:00:27.770490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:18:36.266 [2024-08-11 13:00:27.770504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.786735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.786813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.266 [2024-08-11 13:00:27.786835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.165 ms 00:18:36.266 [2024-08-11 13:00:27.786859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.787007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.787039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:36.266 [2024-08-11 13:00:27.787053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:36.266 [2024-08-11 13:00:27.787064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.795069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.795136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.266 [2024-08-11 13:00:27.795155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.901 ms 00:18:36.266 [2024-08-11 13:00:27.795167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.795241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.795258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.266 [2024-08-11 13:00:27.795271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:36.266 [2024-08-11 13:00:27.795282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.795635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.795655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.266 [2024-08-11 13:00:27.795667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:18:36.266 [2024-08-11 13:00:27.795683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.795860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.795911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.266 [2024-08-11 13:00:27.795924] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:18:36.266 [2024-08-11 13:00:27.795935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.800647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.800722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.266 [2024-08-11 13:00:27.800740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.668 ms 00:18:36.266 [2024-08-11 13:00:27.800762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.803218] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:36.266 [2024-08-11 13:00:27.803404] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:36.266 [2024-08-11 13:00:27.803432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.803445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:36.266 [2024-08-11 13:00:27.803459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:18:36.266 [2024-08-11 13:00:27.803476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.819795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.819907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:36.266 [2024-08-11 13:00:27.819930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.261 ms 00:18:36.266 [2024-08-11 13:00:27.819942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.822277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.822468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:36.266 [2024-08-11 13:00:27.822497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:18:36.266 [2024-08-11 13:00:27.822509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.824275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.824317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:36.266 [2024-08-11 13:00:27.824333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:18:36.266 [2024-08-11 13:00:27.824344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.824755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.824778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:36.266 [2024-08-11 13:00:27.824792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:18:36.266 [2024-08-11 13:00:27.824821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.266 [2024-08-11 13:00:27.852922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.266 [2024-08-11 13:00:27.853000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:36.266 [2024-08-11 13:00:27.853021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.075 ms 00:18:36.266 [2024-08-11 13:00:27.853033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 [2024-08-11 13:00:27.861621] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:36.524 [2024-08-11 13:00:27.864564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.524 [2024-08-11 13:00:27.864777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:36.524 [2024-08-11 13:00:27.864810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.449 ms 00:18:36.524 [2024-08-11 13:00:27.864823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 [2024-08-11 13:00:27.865024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.524 [2024-08-11 13:00:27.865056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:36.524 [2024-08-11 13:00:27.865078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:36.524 [2024-08-11 13:00:27.865098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 [2024-08-11 13:00:27.865206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.524 [2024-08-11 13:00:27.865226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:36.524 [2024-08-11 13:00:27.865252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:36.524 [2024-08-11 13:00:27.865263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 [2024-08-11 13:00:27.865307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.524 [2024-08-11 13:00:27.865332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:36.524 [2024-08-11 13:00:27.865344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:36.524 [2024-08-11 13:00:27.865360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 [2024-08-11 13:00:27.865405] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:36.524 [2024-08-11 13:00:27.865422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.524 [2024-08-11 13:00:27.865433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:36.524 [2024-08-11 13:00:27.865445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:36.524 [2024-08-11 13:00:27.865462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 [2024-08-11 13:00:27.868963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.524 [2024-08-11 13:00:27.869012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:36.524 [2024-08-11 13:00:27.869029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.470 ms 00:18:36.524 [2024-08-11 13:00:27.869050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 [2024-08-11 13:00:27.869133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.524 [2024-08-11 13:00:27.869152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:36.524 [2024-08-11 13:00:27.869165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:36.524 [2024-08-11 13:00:27.869175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.524 
[2024-08-11 13:00:27.870372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.248 ms, result 0 00:19:16.842  Copying: 25/1024 [MB] (25 MBps) Copying: 51/1024 [MB] (25 MBps) Copying: 75/1024 [MB] (24 MBps) Copying: 101/1024 [MB] (26 MBps) Copying: 127/1024 [MB] (25 MBps) Copying: 153/1024 [MB] (25 MBps) Copying: 178/1024 [MB] (25 MBps) Copying: 205/1024 [MB] (26 MBps) Copying: 230/1024 [MB] (25 MBps) Copying: 256/1024 [MB] (25 MBps) Copying: 282/1024 [MB] (25 MBps) Copying: 308/1024 [MB] (25 MBps) Copying: 333/1024 [MB] (25 MBps) Copying: 359/1024 [MB] (25 MBps) Copying: 384/1024 [MB] (25 MBps) Copying: 410/1024 [MB] (25 MBps) Copying: 435/1024 [MB] (25 MBps) Copying: 461/1024 [MB] (26 MBps) Copying: 486/1024 [MB] (25 MBps) Copying: 511/1024 [MB] (24 MBps) Copying: 538/1024 [MB] (26 MBps) Copying: 565/1024 [MB] (26 MBps) Copying: 590/1024 [MB] (25 MBps) Copying: 616/1024 [MB] (26 MBps) Copying: 642/1024 [MB] (25 MBps) Copying: 668/1024 [MB] (26 MBps) Copying: 694/1024 [MB] (25 MBps) Copying: 720/1024 [MB] (25 MBps) Copying: 746/1024 [MB] (26 MBps) Copying: 772/1024 [MB] (26 MBps) Copying: 797/1024 [MB] (25 MBps) Copying: 823/1024 [MB] (25 MBps) Copying: 850/1024 [MB] (26 MBps) Copying: 876/1024 [MB] (25 MBps) Copying: 902/1024 [MB] (26 MBps) Copying: 928/1024 [MB] (26 MBps) Copying: 954/1024 [MB] (26 MBps) Copying: 979/1024 [MB] (24 MBps) Copying: 1005/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-08-11 13:01:08.416687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.842 [2024-08-11 13:01:08.416796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.842 [2024-08-11 13:01:08.416829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:16.842 [2024-08-11 13:01:08.416848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.842 [2024-08-11 13:01:08.416921] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.842 [2024-08-11 13:01:08.417461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.842 [2024-08-11 13:01:08.417498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.842 [2024-08-11 13:01:08.417528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:19:16.842 [2024-08-11 13:01:08.417547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.842 [2024-08-11 13:01:08.417916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.842 [2024-08-11 13:01:08.417944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.842 [2024-08-11 13:01:08.417964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:19:16.842 [2024-08-11 13:01:08.417982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.842 [2024-08-11 13:01:08.424097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.842 [2024-08-11 13:01:08.424367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:16.842 [2024-08-11 13:01:08.424407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.081 ms 00:19:16.842 [2024-08-11 13:01:08.424446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.842 [2024-08-11 13:01:08.435192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.842 
[2024-08-11 13:01:08.435282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.842 [2024-08-11 13:01:08.435310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.675 ms 00:19:16.842 [2024-08-11 13:01:08.435329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.842 [2024-08-11 13:01:08.436974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.842 [2024-08-11 13:01:08.437035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.842 [2024-08-11 13:01:08.437060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:19:16.842 [2024-08-11 13:01:08.437078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.101 [2024-08-11 13:01:08.440008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.101 [2024-08-11 13:01:08.440079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:17.101 [2024-08-11 13:01:08.440104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:19:17.101 [2024-08-11 13:01:08.440142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.101 [2024-08-11 13:01:08.440326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.101 [2024-08-11 13:01:08.440356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:17.101 [2024-08-11 13:01:08.440381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:17.101 [2024-08-11 13:01:08.440405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.101 [2024-08-11 13:01:08.442331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.101 [2024-08-11 13:01:08.442605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:17.101 [2024-08-11 13:01:08.442664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:19:17.101 [2024-08-11 13:01:08.442689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.101 [2024-08-11 13:01:08.444435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.101 [2024-08-11 13:01:08.444492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:17.101 [2024-08-11 13:01:08.444522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:19:17.101 [2024-08-11 13:01:08.444545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.101 [2024-08-11 13:01:08.446223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.101 [2024-08-11 13:01:08.446414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:17.101 [2024-08-11 13:01:08.446445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:19:17.101 [2024-08-11 13:01:08.446469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.101 [2024-08-11 13:01:08.447554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.101 [2024-08-11 13:01:08.447594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:17.101 [2024-08-11 13:01:08.447611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:19:17.101 [2024-08-11 13:01:08.447624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.101 [2024-08-11 13:01:08.447666] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:17.101 [2024-08-11 13:01:08.447709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:17.101 [2024-08-11 13:01:08.447727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:17.101 [2024-08-11 13:01:08.447741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:17.101 [2024-08-11 13:01:08.447755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:17.101 [2024-08-11 13:01:08.447769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.447989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448098] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 
13:01:08.448470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:19:17.102 [2024-08-11 13:01:08.448824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.448997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.449011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.449025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.449039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.449053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.449067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.449081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:17.102 [2024-08-11 13:01:08.449095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:17.103 [2024-08-11 13:01:08.449218] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:17.103 [2024-08-11 13:01:08.449232] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c755973-9dbf-4377-b4a2-419087894d03 00:19:17.103 [2024-08-11 13:01:08.449262] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:17.103 [2024-08-11 13:01:08.449281] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:17.103 [2024-08-11 13:01:08.449294] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:17.103 [2024-08-11 13:01:08.449308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:17.103 [2024-08-11 13:01:08.449321] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:17.103 [2024-08-11 13:01:08.449334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:17.103 [2024-08-11 13:01:08.449347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:17.103 [2024-08-11 13:01:08.449359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:17.103 [2024-08-11 13:01:08.449371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:17.103 [2024-08-11 13:01:08.449385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.103 [2024-08-11 13:01:08.449408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:17.103 [2024-08-11 13:01:08.449423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:19:17.103 [2024-08-11 13:01:08.449447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.451499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.103 [2024-08-11 13:01:08.451667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.103 [2024-08-11 13:01:08.451800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:19:17.103 [2024-08-11 13:01:08.451967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.452224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.103 [2024-08-11 13:01:08.452374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.103 [2024-08-11 13:01:08.452495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:17.103 [2024-08-11 13:01:08.452801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.458355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.458637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.103 [2024-08-11 13:01:08.458788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.458978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.459118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.459193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.103 [2024-08-11 13:01:08.459320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.459382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.459637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.459795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.103 [2024-08-11 13:01:08.459984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.460053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.460186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.460312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.103 [2024-08-11 13:01:08.460462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.460529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.470730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.471088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.103 [2024-08-11 13:01:08.471223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.471367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.478601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.478985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.103 [2024-08-11 13:01:08.479121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.479148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.479237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.479268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.103 [2024-08-11 13:01:08.479283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.479297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.479334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.479350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.103 [2024-08-11 13:01:08.479364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.479377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.479493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.479525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.103 [2024-08-11 13:01:08.479547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.479560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.479617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.479638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.103 
[2024-08-11 13:01:08.479653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.479666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.479732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.479750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.103 [2024-08-11 13:01:08.479770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.479785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.479860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.103 [2024-08-11 13:01:08.479923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.103 [2024-08-11 13:01:08.479938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.103 [2024-08-11 13:01:08.479963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.103 [2024-08-11 13:01:08.480150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.420 ms, result 0 00:19:17.362 00:19:17.362 00:19:17.362 13:01:08 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:19.961 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:19.961 13:01:10 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:19.961 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:19:19.961 [2024-08-11 13:01:11.072735] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
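00:19:19.961 (editor's note) For readers skimming this part of the log: restore.sh@76 re-checks the test file against its stored MD5 sum, and restore.sh@79 then writes that file back into the ftl0 bdev with spdk_dd, which produces the SPDK/DPDK initialization and FTL startup output that follows. A minimal sketch of that verify-then-restore sequence, using only the paths and flags visible in the invocations above (paths and the --seek offset are copied from the log, not authoritative beyond it):
00:19:19.961 
00:19:19.961   SPDK_DIR=/home/vagrant/spdk_repo/spdk
00:19:19.961 
00:19:19.961   # 1. Verify the source file against its recorded checksum (restore.sh@76).
00:19:19.961   md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"
00:19:19.961 
00:19:19.961   # 2. Copy the file into the ftl0 bdev starting 131072 blocks in,
00:19:19.961   #    using the FTL bdev configuration captured in ftl.json (restore.sh@79).
00:19:19.961   "$SPDK_DIR/build/bin/spdk_dd" \
00:19:19.961     --if="$SPDK_DIR/test/ftl/testfile" \
00:19:19.961     --ob=ftl0 \
00:19:19.961     --json="$SPDK_DIR/test/ftl/config/ftl.json" \
00:19:19.961     --seek=131072
00:19:19.961 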
00:19:19.961 [2024-08-11 13:01:11.073131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86377 ] 00:19:19.961 [2024-08-11 13:01:11.223422] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.961 [2024-08-11 13:01:11.266528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.961 [2024-08-11 13:01:11.357855] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.961 [2024-08-11 13:01:11.357977] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.961 [2024-08-11 13:01:11.516415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.516491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:19.961 [2024-08-11 13:01:11.516514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:19.961 [2024-08-11 13:01:11.516539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.516631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.516654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.961 [2024-08-11 13:01:11.516668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:19.961 [2024-08-11 13:01:11.516690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.516724] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:19.961 [2024-08-11 13:01:11.517133] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:19.961 [2024-08-11 13:01:11.517162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.517175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.961 [2024-08-11 13:01:11.517188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:19:19.961 [2024-08-11 13:01:11.517204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.518505] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:19.961 [2024-08-11 13:01:11.520774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.520980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:19.961 [2024-08-11 13:01:11.521012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:19:19.961 [2024-08-11 13:01:11.521025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.521109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.521130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:19.961 [2024-08-11 13:01:11.521144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:19.961 [2024-08-11 13:01:11.521155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.525821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:19.961 [2024-08-11 13:01:11.525909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.961 [2024-08-11 13:01:11.525928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.544 ms 00:19:19.961 [2024-08-11 13:01:11.525940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.526099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.526120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.961 [2024-08-11 13:01:11.526150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:19.961 [2024-08-11 13:01:11.526161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.526276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.526299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:19.961 [2024-08-11 13:01:11.526316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:19.961 [2024-08-11 13:01:11.526329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.526368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:19.961 [2024-08-11 13:01:11.527800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.527851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.961 [2024-08-11 13:01:11.527893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:19:19.961 [2024-08-11 13:01:11.527907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.527964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.527993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:19.961 [2024-08-11 13:01:11.528007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:19.961 [2024-08-11 13:01:11.528029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.528059] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:19.961 [2024-08-11 13:01:11.528092] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:19.961 [2024-08-11 13:01:11.528149] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:19.961 [2024-08-11 13:01:11.528184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:19.961 [2024-08-11 13:01:11.528301] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:19.961 [2024-08-11 13:01:11.528318] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:19.961 [2024-08-11 13:01:11.528333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:19.961 [2024-08-11 13:01:11.528349] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:19.961 [2024-08-11 13:01:11.528362] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:19.961 [2024-08-11 13:01:11.528389] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:19.961 [2024-08-11 13:01:11.528401] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:19.961 [2024-08-11 13:01:11.528417] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:19.961 [2024-08-11 13:01:11.528428] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:19.961 [2024-08-11 13:01:11.528440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.528452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:19.961 [2024-08-11 13:01:11.528464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:19:19.961 [2024-08-11 13:01:11.528475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.528585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.961 [2024-08-11 13:01:11.528604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:19.961 [2024-08-11 13:01:11.528623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:19.961 [2024-08-11 13:01:11.528635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.961 [2024-08-11 13:01:11.528750] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:19.961 [2024-08-11 13:01:11.528777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:19.961 [2024-08-11 13:01:11.528805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.961 [2024-08-11 13:01:11.528817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.961 [2024-08-11 13:01:11.528829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:19.961 [2024-08-11 13:01:11.528840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:19.961 [2024-08-11 13:01:11.528851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:19.961 [2024-08-11 13:01:11.528862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:19.961 [2024-08-11 13:01:11.528891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:19.961 [2024-08-11 13:01:11.528904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.961 [2024-08-11 13:01:11.528914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:19.961 [2024-08-11 13:01:11.528925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:19.961 [2024-08-11 13:01:11.528935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.961 [2024-08-11 13:01:11.528946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:19.961 [2024-08-11 13:01:11.528961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:19.961 [2024-08-11 13:01:11.528973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.961 [2024-08-11 13:01:11.528984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:19.961 [2024-08-11 13:01:11.528994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:19.961 [2024-08-11 13:01:11.529004] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.961 [2024-08-11 13:01:11.529015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:19.961 [2024-08-11 13:01:11.529026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.962 [2024-08-11 13:01:11.529046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:19.962 [2024-08-11 13:01:11.529059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.962 [2024-08-11 13:01:11.529079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:19.962 [2024-08-11 13:01:11.529089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.962 [2024-08-11 13:01:11.529110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:19.962 [2024-08-11 13:01:11.529120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.962 [2024-08-11 13:01:11.529148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:19.962 [2024-08-11 13:01:11.529158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.962 [2024-08-11 13:01:11.529179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:19.962 [2024-08-11 13:01:11.529189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:19.962 [2024-08-11 13:01:11.529199] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.962 [2024-08-11 13:01:11.529209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:19.962 [2024-08-11 13:01:11.529220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:19.962 [2024-08-11 13:01:11.529230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:19.962 [2024-08-11 13:01:11.529251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:19.962 [2024-08-11 13:01:11.529260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529270] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:19.962 [2024-08-11 13:01:11.529282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:19.962 [2024-08-11 13:01:11.529303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.962 [2024-08-11 13:01:11.529318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.962 [2024-08-11 13:01:11.529330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:19.962 [2024-08-11 13:01:11.529340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:19.962 [2024-08-11 13:01:11.529351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:19.962 
[2024-08-11 13:01:11.529361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:19.962 [2024-08-11 13:01:11.529371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:19.962 [2024-08-11 13:01:11.529382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:19.962 [2024-08-11 13:01:11.529394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:19.962 [2024-08-11 13:01:11.529407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.962 [2024-08-11 13:01:11.529425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:19.962 [2024-08-11 13:01:11.529437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:19.962 [2024-08-11 13:01:11.529448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:19.962 [2024-08-11 13:01:11.529459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:19.962 [2024-08-11 13:01:11.529470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:19.962 [2024-08-11 13:01:11.529482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:19.962 [2024-08-11 13:01:11.529494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:19.962 [2024-08-11 13:01:11.529507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:19.962 [2024-08-11 13:01:11.529519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:19.962 [2024-08-11 13:01:11.529531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:19.962 [2024-08-11 13:01:11.529542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:19.962 [2024-08-11 13:01:11.529565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:19.962 [2024-08-11 13:01:11.529577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:19.962 [2024-08-11 13:01:11.529589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:19.962 [2024-08-11 13:01:11.529601] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:19.962 [2024-08-11 13:01:11.529613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.962 [2024-08-11 13:01:11.529625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:19.962 [2024-08-11 13:01:11.529637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:19.962 [2024-08-11 13:01:11.529648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:19.962 [2024-08-11 13:01:11.529659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:19.962 [2024-08-11 13:01:11.529672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.962 [2024-08-11 13:01:11.529685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:19.962 [2024-08-11 13:01:11.529698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:19:19.962 [2024-08-11 13:01:11.529713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.962 [2024-08-11 13:01:11.545688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.962 [2024-08-11 13:01:11.545769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.962 [2024-08-11 13:01:11.545791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.905 ms 00:19:19.962 [2024-08-11 13:01:11.545803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.962 [2024-08-11 13:01:11.545977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.962 [2024-08-11 13:01:11.546003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:19.962 [2024-08-11 13:01:11.546018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:19.962 [2024-08-11 13:01:11.546030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.962 [2024-08-11 13:01:11.555272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.962 [2024-08-11 13:01:11.555594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.962 [2024-08-11 13:01:11.555630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.132 ms 00:19:19.962 [2024-08-11 13:01:11.555655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.962 [2024-08-11 13:01:11.555739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.962 [2024-08-11 13:01:11.555760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.962 [2024-08-11 13:01:11.555777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:19.962 [2024-08-11 13:01:11.555805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.962 [2024-08-11 13:01:11.556257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.962 [2024-08-11 13:01:11.556289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.962 [2024-08-11 13:01:11.556321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:19.962 [2024-08-11 13:01:11.556339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.962 [2024-08-11 13:01:11.556572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.962 [2024-08-11 13:01:11.556606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.962 [2024-08-11 13:01:11.556623] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:19:19.962 [2024-08-11 13:01:11.556636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.221 [2024-08-11 13:01:11.561835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.221 [2024-08-11 13:01:11.561936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.221 [2024-08-11 13:01:11.561957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:19:20.221 [2024-08-11 13:01:11.561969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.221 [2024-08-11 13:01:11.564351] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:20.221 [2024-08-11 13:01:11.564398] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.222 [2024-08-11 13:01:11.564418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.564430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.222 [2024-08-11 13:01:11.564444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.292 ms 00:19:20.222 [2024-08-11 13:01:11.564460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.580716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.580811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.222 [2024-08-11 13:01:11.580833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.199 ms 00:19:20.222 [2024-08-11 13:01:11.580845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.583337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.583390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.222 [2024-08-11 13:01:11.583408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:19:20.222 [2024-08-11 13:01:11.583419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.585254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.585298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.222 [2024-08-11 13:01:11.585314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:19:20.222 [2024-08-11 13:01:11.585326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.585737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.585766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.222 [2024-08-11 13:01:11.585794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:20.222 [2024-08-11 13:01:11.585812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.604042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.604352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.222 [2024-08-11 13:01:11.604385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.201 ms 00:19:20.222 [2024-08-11 13:01:11.604398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.613046] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:20.222 [2024-08-11 13:01:11.616017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.616061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.222 [2024-08-11 13:01:11.616080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.536 ms 00:19:20.222 [2024-08-11 13:01:11.616092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.616230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.616257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.222 [2024-08-11 13:01:11.616271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:20.222 [2024-08-11 13:01:11.616286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.616425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.616448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.222 [2024-08-11 13:01:11.616461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:20.222 [2024-08-11 13:01:11.616472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.616520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.616537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.222 [2024-08-11 13:01:11.616565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.222 [2024-08-11 13:01:11.616576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.616623] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.222 [2024-08-11 13:01:11.616642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.616654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.222 [2024-08-11 13:01:11.616666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:20.222 [2024-08-11 13:01:11.616677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.620394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.620445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.222 [2024-08-11 13:01:11.620463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:19:20.222 [2024-08-11 13:01:11.620484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 [2024-08-11 13:01:11.620567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.222 [2024-08-11 13:01:11.620587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.222 [2024-08-11 13:01:11.620613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:20.222 [2024-08-11 13:01:11.620625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.222 
[2024-08-11 13:01:11.622019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.112 ms, result 0 00:20:02.861  Copying: 25/1024 [MB] (25 MBps) Copying: 51/1024 [MB] (25 MBps) Copying: 76/1024 [MB] (25 MBps) Copying: 101/1024 [MB] (24 MBps) Copying: 125/1024 [MB] (24 MBps) Copying: 149/1024 [MB] (23 MBps) Copying: 174/1024 [MB] (24 MBps) Copying: 198/1024 [MB] (24 MBps) Copying: 222/1024 [MB] (23 MBps) Copying: 245/1024 [MB] (23 MBps) Copying: 270/1024 [MB] (24 MBps) Copying: 295/1024 [MB] (25 MBps) Copying: 320/1024 [MB] (24 MBps) Copying: 344/1024 [MB] (24 MBps) Copying: 369/1024 [MB] (24 MBps) Copying: 394/1024 [MB] (24 MBps) Copying: 418/1024 [MB] (23 MBps) Copying: 442/1024 [MB] (24 MBps) Copying: 467/1024 [MB] (24 MBps) Copying: 491/1024 [MB] (24 MBps) Copying: 516/1024 [MB] (25 MBps) Copying: 541/1024 [MB] (24 MBps) Copying: 564/1024 [MB] (22 MBps) Copying: 587/1024 [MB] (23 MBps) Copying: 612/1024 [MB] (25 MBps) Copying: 637/1024 [MB] (25 MBps) Copying: 662/1024 [MB] (24 MBps) Copying: 687/1024 [MB] (25 MBps) Copying: 712/1024 [MB] (25 MBps) Copying: 736/1024 [MB] (24 MBps) Copying: 761/1024 [MB] (24 MBps) Copying: 785/1024 [MB] (24 MBps) Copying: 810/1024 [MB] (24 MBps) Copying: 834/1024 [MB] (24 MBps) Copying: 859/1024 [MB] (24 MBps) Copying: 883/1024 [MB] (24 MBps) Copying: 908/1024 [MB] (24 MBps) Copying: 932/1024 [MB] (24 MBps) Copying: 957/1024 [MB] (24 MBps) Copying: 982/1024 [MB] (24 MBps) Copying: 1006/1024 [MB] (24 MBps) Copying: 1023/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-08-11 13:01:54.450858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.861 [2024-08-11 13:01:54.451257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.861 [2024-08-11 13:01:54.451310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:02.861 [2024-08-11 13:01:54.451324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.861 [2024-08-11 13:01:54.452443] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.861 [2024-08-11 13:01:54.455015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.861 [2024-08-11 13:01:54.455067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.861 [2024-08-11 13:01:54.455086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.525 ms 00:20:02.861 [2024-08-11 13:01:54.455098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.470547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.470857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.121 [2024-08-11 13:01:54.470906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.568 ms 00:20:03.121 [2024-08-11 13:01:54.470937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.493239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.493569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:03.121 [2024-08-11 13:01:54.493602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.267 ms 00:20:03.121 [2024-08-11 13:01:54.493615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 
[2024-08-11 13:01:54.500358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.500561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:03.121 [2024-08-11 13:01:54.500591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.670 ms 00:20:03.121 [2024-08-11 13:01:54.500612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.502015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.502059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.121 [2024-08-11 13:01:54.502075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:20:03.121 [2024-08-11 13:01:54.502088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.505006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.505069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.121 [2024-08-11 13:01:54.505086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:20:03.121 [2024-08-11 13:01:54.505097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.604735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.604859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.121 [2024-08-11 13:01:54.604902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.585 ms 00:20:03.121 [2024-08-11 13:01:54.604924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.606860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.606914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:03.121 [2024-08-11 13:01:54.606931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:20:03.121 [2024-08-11 13:01:54.606942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.608264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.608303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:03.121 [2024-08-11 13:01:54.608318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:20:03.121 [2024-08-11 13:01:54.608329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.609541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.609580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.121 [2024-08-11 13:01:54.609596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:20:03.121 [2024-08-11 13:01:54.609606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.610678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.121 [2024-08-11 13:01:54.610840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.121 [2024-08-11 13:01:54.610883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:20:03.121 [2024-08-11 13:01:54.610899] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.121 [2024-08-11 13:01:54.610943] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:03.121 [2024-08-11 13:01:54.610982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118784 / 261120 wr_cnt: 1 state: open 00:20:03.121 [2024-08-11 13:01:54.611004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611262] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:03.121 [2024-08-11 13:01:54.611465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611575] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 
13:01:54.611900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.611993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:20:03.122 [2024-08-11 13:01:54.612195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:03.122 [2024-08-11 13:01:54.612228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:03.122 [2024-08-11 13:01:54.612240] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c755973-9dbf-4377-b4a2-419087894d03 00:20:03.122 [2024-08-11 13:01:54.612252] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118784 00:20:03.122 [2024-08-11 13:01:54.612263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119744 00:20:03.122 [2024-08-11 13:01:54.612286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118784 00:20:03.122 [2024-08-11 13:01:54.612299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:20:03.122 [2024-08-11 13:01:54.612310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:03.122 [2024-08-11 13:01:54.612330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:03.122 [2024-08-11 13:01:54.612346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.122 [2024-08-11 13:01:54.612356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.122 [2024-08-11 13:01:54.612367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.122 [2024-08-11 13:01:54.612379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.122 [2024-08-11 13:01:54.612392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.122 [2024-08-11 13:01:54.612404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:20:03.122 [2024-08-11 13:01:54.612415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.122 [2024-08-11 13:01:54.614264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.122 [2024-08-11 13:01:54.614329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.122 [2024-08-11 13:01:54.614522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:20:03.122 [2024-08-11 13:01:54.614583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.122 [2024-08-11 13:01:54.614745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.122 [2024-08-11 13:01:54.614807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.122 [2024-08-11 13:01:54.614933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:03.122 [2024-08-11 13:01:54.615057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.122 [2024-08-11 13:01:54.619957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.122 [2024-08-11 13:01:54.620225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.122 [2024-08-11 13:01:54.620353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.122 [2024-08-11 13:01:54.620482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.122 [2024-08-11 13:01:54.620606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.122 [2024-08-11 13:01:54.620683] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.122 [2024-08-11 13:01:54.620798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.122 [2024-08-11 13:01:54.620850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.122 [2024-08-11 13:01:54.621055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.122 [2024-08-11 13:01:54.621201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.122 [2024-08-11 13:01:54.621325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.122 [2024-08-11 13:01:54.621378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.122 [2024-08-11 13:01:54.621528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.122 [2024-08-11 13:01:54.621646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.122 [2024-08-11 13:01:54.621778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.621906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.631540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.123 [2024-08-11 13:01:54.631858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.123 [2024-08-11 13:01:54.632017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.632188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.639009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.123 [2024-08-11 13:01:54.639311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.123 [2024-08-11 13:01:54.639449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.639557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.639678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.123 [2024-08-11 13:01:54.639733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.123 [2024-08-11 13:01:54.639969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.639999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.640079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.123 [2024-08-11 13:01:54.640100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.123 [2024-08-11 13:01:54.640113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.640124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.640232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.123 [2024-08-11 13:01:54.640254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.123 [2024-08-11 13:01:54.640268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.640279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.640332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:20:03.123 [2024-08-11 13:01:54.640357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.123 [2024-08-11 13:01:54.640371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.640382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.640459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.123 [2024-08-11 13:01:54.640483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.123 [2024-08-11 13:01:54.640495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.640507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.640579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.123 [2024-08-11 13:01:54.640601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.123 [2024-08-11 13:01:54.640614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.123 [2024-08-11 13:01:54.640625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.123 [2024-08-11 13:01:54.640811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 193.072 ms, result 0 00:20:04.058 00:20:04.058 00:20:04.058 13:01:55 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:04.058 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:04.058 [2024-08-11 13:01:55.402847] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:20:04.058 [2024-08-11 13:01:55.403058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86824 ] 00:20:04.058 [2024-08-11 13:01:55.554137] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.058 [2024-08-11 13:01:55.597286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.318 [2024-08-11 13:01:55.688885] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:04.318 [2024-08-11 13:01:55.688983] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:04.318 [2024-08-11 13:01:55.849197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.849285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:04.318 [2024-08-11 13:01:55.849309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:04.318 [2024-08-11 13:01:55.849322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.849420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.849450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.318 [2024-08-11 13:01:55.849465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:04.318 [2024-08-11 13:01:55.849476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.849512] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:04.318 [2024-08-11 13:01:55.849915] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:04.318 [2024-08-11 13:01:55.849944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.849968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.318 [2024-08-11 13:01:55.849981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:20:04.318 [2024-08-11 13:01:55.849993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.851231] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:04.318 [2024-08-11 13:01:55.853580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.853626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:04.318 [2024-08-11 13:01:55.853658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.355 ms 00:20:04.318 [2024-08-11 13:01:55.853679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.853763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.853793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:04.318 [2024-08-11 13:01:55.853807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:04.318 [2024-08-11 13:01:55.853819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.858489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
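The trace_step notices in this log follow a fixed pattern: an Action (or Rollback) line, then name, duration and status lines for that step. A small, purely illustrative Python sketch for totalling those per-step durations from a captured log is below; the regular expression is an assumption based on the message format visible here, not an SPDK-provided tool, and the authoritative totals remain the finish_msg lines the log prints itself.

    import re, sys

    # Illustrative sketch: sum the per-step durations reported by
    # mngt/ftl_mngt.c trace_step lines ("duration: <ms> ms").
    dur_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")
    durations = [float(ms) for ms in dur_re.findall(sys.stdin.read())]
    print(f"{len(durations)} steps, {sum(durations):.3f} ms total")

Piping a captured log through it (for example, spdk_dd ... 2>&1 | python3 sum_trace_steps.py, where the script name is hypothetical) gives a rough total of the traced FTL startup and shutdown step time.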
00:20:04.318 [2024-08-11 13:01:55.858804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.318 [2024-08-11 13:01:55.858850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:20:04.318 [2024-08-11 13:01:55.858864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.859061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.859085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.318 [2024-08-11 13:01:55.859116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:04.318 [2024-08-11 13:01:55.859136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.859224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.859245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:04.318 [2024-08-11 13:01:55.859258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:04.318 [2024-08-11 13:01:55.859270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.859314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:04.318 [2024-08-11 13:01:55.860749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.860790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.318 [2024-08-11 13:01:55.860813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:20:04.318 [2024-08-11 13:01:55.860825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.860881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.860902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:04.318 [2024-08-11 13:01:55.860916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:04.318 [2024-08-11 13:01:55.860927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.860960] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:04.318 [2024-08-11 13:01:55.861015] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:04.318 [2024-08-11 13:01:55.861088] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:04.318 [2024-08-11 13:01:55.861125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:04.318 [2024-08-11 13:01:55.861247] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:04.318 [2024-08-11 13:01:55.861271] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:04.318 [2024-08-11 13:01:55.861288] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:04.318 [2024-08-11 13:01:55.861304] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:04.318 [2024-08-11 13:01:55.861318] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:04.318 [2024-08-11 13:01:55.861331] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:04.318 [2024-08-11 13:01:55.861342] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:04.318 [2024-08-11 13:01:55.861358] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:04.318 [2024-08-11 13:01:55.861370] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:04.318 [2024-08-11 13:01:55.861383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.861395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:04.318 [2024-08-11 13:01:55.861408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:20:04.318 [2024-08-11 13:01:55.861419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.861521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.318 [2024-08-11 13:01:55.861540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:04.318 [2024-08-11 13:01:55.861566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:04.318 [2024-08-11 13:01:55.861578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.318 [2024-08-11 13:01:55.861709] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:04.318 [2024-08-11 13:01:55.861738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:04.318 [2024-08-11 13:01:55.861752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.318 [2024-08-11 13:01:55.861764] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.318 [2024-08-11 13:01:55.861776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:04.318 [2024-08-11 13:01:55.861787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:04.318 [2024-08-11 13:01:55.861798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:04.318 [2024-08-11 13:01:55.861811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:04.318 [2024-08-11 13:01:55.861823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:04.318 [2024-08-11 13:01:55.861834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.318 [2024-08-11 13:01:55.861844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:04.318 [2024-08-11 13:01:55.861855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:04.318 [2024-08-11 13:01:55.862050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.318 [2024-08-11 13:01:55.862115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:04.318 [2024-08-11 13:01:55.862160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:04.318 [2024-08-11 13:01:55.862199] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.318 [2024-08-11 13:01:55.862244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:04.318 [2024-08-11 13:01:55.862365] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:04.318 [2024-08-11 13:01:55.862540] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.318 [2024-08-11 13:01:55.862593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:04.318 [2024-08-11 13:01:55.862746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:04.318 [2024-08-11 13:01:55.862797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.318 [2024-08-11 13:01:55.862838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:04.319 [2024-08-11 13:01:55.862997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:04.319 [2024-08-11 13:01:55.863049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.319 [2024-08-11 13:01:55.863120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:04.319 [2024-08-11 13:01:55.863292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:04.319 [2024-08-11 13:01:55.863342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.319 [2024-08-11 13:01:55.863382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:04.319 [2024-08-11 13:01:55.863544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:04.319 [2024-08-11 13:01:55.863577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.319 [2024-08-11 13:01:55.863596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:04.319 [2024-08-11 13:01:55.863621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:04.319 [2024-08-11 13:01:55.863642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.319 [2024-08-11 13:01:55.863654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:04.319 [2024-08-11 13:01:55.863666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:04.319 [2024-08-11 13:01:55.863677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.319 [2024-08-11 13:01:55.863688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:04.319 [2024-08-11 13:01:55.863700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:04.319 [2024-08-11 13:01:55.863712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.319 [2024-08-11 13:01:55.863723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:04.319 [2024-08-11 13:01:55.863734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:04.319 [2024-08-11 13:01:55.863745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.319 [2024-08-11 13:01:55.863755] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:04.319 [2024-08-11 13:01:55.863768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:04.319 [2024-08-11 13:01:55.863780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.319 [2024-08-11 13:01:55.863791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.319 [2024-08-11 13:01:55.863803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:04.319 [2024-08-11 13:01:55.863818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:04.319 [2024-08-11 13:01:55.863845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:04.319 
[2024-08-11 13:01:55.863857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:04.319 [2024-08-11 13:01:55.863881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:04.319 [2024-08-11 13:01:55.863896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:04.319 [2024-08-11 13:01:55.863909] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:04.319 [2024-08-11 13:01:55.863924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.319 [2024-08-11 13:01:55.863943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:04.319 [2024-08-11 13:01:55.863955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:04.319 [2024-08-11 13:01:55.863967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:04.319 [2024-08-11 13:01:55.863979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:04.319 [2024-08-11 13:01:55.863991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:04.319 [2024-08-11 13:01:55.864003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:04.319 [2024-08-11 13:01:55.864015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:04.319 [2024-08-11 13:01:55.864027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:04.319 [2024-08-11 13:01:55.864039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:04.319 [2024-08-11 13:01:55.864055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:04.319 [2024-08-11 13:01:55.864067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:04.319 [2024-08-11 13:01:55.864093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:04.319 [2024-08-11 13:01:55.864106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:04.319 [2024-08-11 13:01:55.864118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:04.319 [2024-08-11 13:01:55.864130] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:04.319 [2024-08-11 13:01:55.864143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.319 [2024-08-11 13:01:55.864157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:04.319 [2024-08-11 13:01:55.864170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:04.319 [2024-08-11 13:01:55.864183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:04.319 [2024-08-11 13:01:55.864194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:04.319 [2024-08-11 13:01:55.864209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.864221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:04.319 [2024-08-11 13:01:55.864235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:20:04.319 [2024-08-11 13:01:55.864246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.880986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.881328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.319 [2024-08-11 13:01:55.881480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.611 ms 00:20:04.319 [2024-08-11 13:01:55.881664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.881903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.881987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:04.319 [2024-08-11 13:01:55.882136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:20:04.319 [2024-08-11 13:01:55.882212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.891776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.892132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.319 [2024-08-11 13:01:55.892279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.286 ms 00:20:04.319 [2024-08-11 13:01:55.892398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.892513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.892584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.319 [2024-08-11 13:01:55.892696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:04.319 [2024-08-11 13:01:55.892756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.893336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.893479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.319 [2024-08-11 13:01:55.893592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:04.319 [2024-08-11 13:01:55.893699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.893939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.894084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.319 [2024-08-11 13:01:55.894213] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:20:04.319 [2024-08-11 13:01:55.894267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.899132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.899383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.319 [2024-08-11 13:01:55.899505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms 00:20:04.319 [2024-08-11 13:01:55.899557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.319 [2024-08-11 13:01:55.902115] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:04.319 [2024-08-11 13:01:55.902325] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:04.319 [2024-08-11 13:01:55.902471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.319 [2024-08-11 13:01:55.902518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:04.319 [2024-08-11 13:01:55.902621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:20:04.319 [2024-08-11 13:01:55.902672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.919110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.919427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:04.578 [2024-08-11 13:01:55.919552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.348 ms 00:20:04.578 [2024-08-11 13:01:55.919601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.921986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.922032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:04.578 [2024-08-11 13:01:55.922050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:20:04.578 [2024-08-11 13:01:55.922062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.923699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.923744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:04.578 [2024-08-11 13:01:55.923762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.592 ms 00:20:04.578 [2024-08-11 13:01:55.923773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.924221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.924246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:04.578 [2024-08-11 13:01:55.924261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:20:04.578 [2024-08-11 13:01:55.924286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.942544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.942879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:04.578 [2024-08-11 13:01:55.942914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.231 ms 00:20:04.578 [2024-08-11 13:01:55.942947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.951581] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:04.578 [2024-08-11 13:01:55.954555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.954598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:04.578 [2024-08-11 13:01:55.954619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.528 ms 00:20:04.578 [2024-08-11 13:01:55.954631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.954788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.954815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:04.578 [2024-08-11 13:01:55.954830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:04.578 [2024-08-11 13:01:55.954842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.578 [2024-08-11 13:01:55.956616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.578 [2024-08-11 13:01:55.956669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:04.579 [2024-08-11 13:01:55.956686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:20:04.579 [2024-08-11 13:01:55.956697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.579 [2024-08-11 13:01:55.956741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.579 [2024-08-11 13:01:55.956774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:04.579 [2024-08-11 13:01:55.956793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:04.579 [2024-08-11 13:01:55.956817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.579 [2024-08-11 13:01:55.956893] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:04.579 [2024-08-11 13:01:55.956915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.579 [2024-08-11 13:01:55.956928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:04.579 [2024-08-11 13:01:55.956941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:04.579 [2024-08-11 13:01:55.956953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.579 [2024-08-11 13:01:55.960577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.579 [2024-08-11 13:01:55.960630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:04.579 [2024-08-11 13:01:55.960650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:20:04.579 [2024-08-11 13:01:55.960670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.579 [2024-08-11 13:01:55.960756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.579 [2024-08-11 13:01:55.960777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:04.579 [2024-08-11 13:01:55.960791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:04.579 [2024-08-11 13:01:55.960802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.579 
[2024-08-11 13:01:55.968518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.881 ms, result 0 00:20:46.057  Copying: 23/1024 [MB] (23 MBps) Copying: 48/1024 [MB] (25 MBps) Copying: 74/1024 [MB] (25 MBps) Copying: 99/1024 [MB] (25 MBps) Copying: 125/1024 [MB] (25 MBps) Copying: 151/1024 [MB] (25 MBps) Copying: 177/1024 [MB] (25 MBps) Copying: 203/1024 [MB] (25 MBps) Copying: 228/1024 [MB] (25 MBps) Copying: 253/1024 [MB] (25 MBps) Copying: 279/1024 [MB] (25 MBps) Copying: 304/1024 [MB] (25 MBps) Copying: 330/1024 [MB] (25 MBps) Copying: 356/1024 [MB] (25 MBps) Copying: 381/1024 [MB] (25 MBps) Copying: 407/1024 [MB] (26 MBps) Copying: 433/1024 [MB] (25 MBps) Copying: 459/1024 [MB] (25 MBps) Copying: 485/1024 [MB] (26 MBps) Copying: 510/1024 [MB] (25 MBps) Copying: 536/1024 [MB] (26 MBps) Copying: 562/1024 [MB] (25 MBps) Copying: 587/1024 [MB] (25 MBps) Copying: 613/1024 [MB] (25 MBps) Copying: 638/1024 [MB] (25 MBps) Copying: 663/1024 [MB] (24 MBps) Copying: 689/1024 [MB] (25 MBps) Copying: 715/1024 [MB] (26 MBps) Copying: 740/1024 [MB] (25 MBps) Copying: 765/1024 [MB] (25 MBps) Copying: 792/1024 [MB] (26 MBps) Copying: 816/1024 [MB] (24 MBps) Copying: 842/1024 [MB] (25 MBps) Copying: 868/1024 [MB] (25 MBps) Copying: 893/1024 [MB] (25 MBps) Copying: 919/1024 [MB] (25 MBps) Copying: 945/1024 [MB] (25 MBps) Copying: 971/1024 [MB] (25 MBps) Copying: 996/1024 [MB] (24 MBps) Copying: 1022/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-08-11 13:02:37.216617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.216950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.057 [2024-08-11 13:02:37.216990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.057 [2024-08-11 13:02:37.217025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.217103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.057 [2024-08-11 13:02:37.217658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.217687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.057 [2024-08-11 13:02:37.217704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:20:46.057 [2024-08-11 13:02:37.217718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.218064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.218093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:46.057 [2024-08-11 13:02:37.218109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:46.057 [2024-08-11 13:02:37.218134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.224602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.224912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:46.057 [2024-08-11 13:02:37.224949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.424 ms 00:20:46.057 [2024-08-11 13:02:37.224965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.234417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.234508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:46.057 [2024-08-11 13:02:37.234557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.382 ms 00:20:46.057 [2024-08-11 13:02:37.234572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.236276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.236492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.057 [2024-08-11 13:02:37.236525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:20:46.057 [2024-08-11 13:02:37.236542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.239898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.239970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:46.057 [2024-08-11 13:02:37.239992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:20:46.057 [2024-08-11 13:02:37.240006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.352650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.352803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.057 [2024-08-11 13:02:37.352831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.579 ms 00:20:46.057 [2024-08-11 13:02:37.352855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.354793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.354845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:46.057 [2024-08-11 13:02:37.354880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:20:46.057 [2024-08-11 13:02:37.354897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.356350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.356398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:46.057 [2024-08-11 13:02:37.356418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:20:46.057 [2024-08-11 13:02:37.356432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.357759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.357809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:46.057 [2024-08-11 13:02:37.357828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:20:46.057 [2024-08-11 13:02:37.357841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 [2024-08-11 13:02:37.359105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.057 [2024-08-11 13:02:37.359149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:46.057 [2024-08-11 13:02:37.359167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:20:46.057 [2024-08-11 13:02:37.359180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.057 
[2024-08-11 13:02:37.359224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:46.057 [2024-08-11 13:02:37.359273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:20:46.057 [2024-08-11 13:02:37.359301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:46.057 [2024-08-11 13:02:37.359450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 
[2024-08-11 13:02:37.359641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.359992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:20:46.058 [2024-08-11 13:02:37.360104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:46.058 [2024-08-11 13:02:37.360883] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:46.058 [2024-08-11 13:02:37.360901] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c755973-9dbf-4377-b4a2-419087894d03 00:20:46.058 [2024-08-11 13:02:37.360929] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:20:46.058 [2024-08-11 13:02:37.360943] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15808 00:20:46.059 [2024-08-11 13:02:37.360956] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14848 00:20:46.059 [2024-08-11 13:02:37.360970] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0647 00:20:46.059 [2024-08-11 13:02:37.360984] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:46.059 [2024-08-11 13:02:37.361008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:46.059 [2024-08-11 13:02:37.361022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:46.059 [2024-08-11 13:02:37.361035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:46.059 [2024-08-11 13:02:37.361047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:46.059 [2024-08-11 13:02:37.361063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.059 [2024-08-11 13:02:37.361086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:46.059 [2024-08-11 13:02:37.361102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:20:46.059 [2024-08-11 13:02:37.361115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.362653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.059 [2024-08-11 13:02:37.362689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.059 [2024-08-11 13:02:37.362706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:20:46.059 [2024-08-11 13:02:37.362729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.362827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.059 [2024-08-11 13:02:37.362860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.059 [2024-08-11 13:02:37.362903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:46.059 [2024-08-11 13:02:37.362919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.368267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.368336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.059 [2024-08-11 13:02:37.368358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.368379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.368466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.368487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.059 [2024-08-11 13:02:37.368503] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.368517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.368628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.368653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.059 [2024-08-11 13:02:37.368669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.368683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.368719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.368738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.059 [2024-08-11 13:02:37.368752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.368766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.379044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.379128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.059 [2024-08-11 13:02:37.379151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.379190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.386574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.386655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.059 [2024-08-11 13:02:37.386678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.386692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.386784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.386809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.059 [2024-08-11 13:02:37.386826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.386839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.387236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.387338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.059 [2024-08-11 13:02:37.387397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.387445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.387589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.387629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.059 [2024-08-11 13:02:37.387660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.387675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.387773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.387804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:20:46.059 [2024-08-11 13:02:37.387820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.387852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.387945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.387969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.059 [2024-08-11 13:02:37.387985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.387998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.388060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.059 [2024-08-11 13:02:37.388088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.059 [2024-08-11 13:02:37.388104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.059 [2024-08-11 13:02:37.388118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.059 [2024-08-11 13:02:37.388333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 171.674 ms, result 0 00:20:46.059 00:20:46.059 00:20:46.059 13:02:37 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:48.590 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:48.590 13:02:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:48.590 13:02:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:20:48.590 13:02:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85301 00:20:48.590 13:02:40 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 85301 ']' 00:20:48.590 13:02:40 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 85301 00:20:48.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (85301) - No such process 00:20:48.590 Process with pid 85301 is not found 00:20:48.590 Remove shared memory files 00:20:48.590 13:02:40 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 85301 is not found' 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:48.590 13:02:40 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:20:48.590 ************************************ 00:20:48.590 END TEST ftl_restore 00:20:48.590 ************************************ 00:20:48.590 00:20:48.590 real 3m12.534s 00:20:48.590 user 2m57.458s 00:20:48.590 sys 0m16.682s 00:20:48.590 13:02:40 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:20:48.590 13:02:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:48.590 13:02:40 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:48.590 13:02:40 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:48.590 13:02:40 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:48.590 13:02:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:48.590 ************************************ 00:20:48.590 START TEST ftl_dirty_shutdown 00:20:48.590 ************************************ 00:20:48.590 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:48.849 * Looking for test storage... 00:20:48.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:48.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:48.849 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=87335 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 87335 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 87335 ']' 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:48.850 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:48.850 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:48.850 [2024-08-11 13:02:40.323246] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:20:48.850 [2024-08-11 13:02:40.323559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87335 ] 00:20:49.108 [2024-08-11 13:02:40.465723] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.108 [2024-08-11 13:02:40.502994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.108 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:49.108 13:02:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:20:49.108 13:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:49.108 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:49.108 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:49.108 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:49.109 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:49.109 13:02:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:49.676 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:49.934 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:49.934 { 00:20:49.934 "name": "nvme0n1", 00:20:49.934 "aliases": [ 00:20:49.934 "a5e5696b-2356-4dce-90b4-74a655c9b227" 00:20:49.934 ], 00:20:49.934 "product_name": "NVMe disk", 00:20:49.934 "block_size": 4096, 00:20:49.934 "num_blocks": 1310720, 00:20:49.934 "uuid": "a5e5696b-2356-4dce-90b4-74a655c9b227", 00:20:49.934 "assigned_rate_limits": { 00:20:49.934 "rw_ios_per_sec": 0, 00:20:49.934 "rw_mbytes_per_sec": 0, 00:20:49.934 "r_mbytes_per_sec": 0, 00:20:49.934 "w_mbytes_per_sec": 0 00:20:49.934 }, 00:20:49.934 "claimed": true, 00:20:49.935 "claim_type": "read_many_write_one", 00:20:49.935 "zoned": false, 00:20:49.935 "supported_io_types": { 00:20:49.935 "read": true, 00:20:49.935 "write": true, 00:20:49.935 "unmap": true, 00:20:49.935 "flush": true, 00:20:49.935 "reset": true, 00:20:49.935 "nvme_admin": true, 00:20:49.935 "nvme_io": true, 00:20:49.935 "nvme_io_md": false, 00:20:49.935 "write_zeroes": true, 00:20:49.935 "zcopy": false, 00:20:49.935 "get_zone_info": false, 00:20:49.935 "zone_management": false, 00:20:49.935 "zone_append": false, 00:20:49.935 "compare": true, 00:20:49.935 "compare_and_write": false, 00:20:49.935 "abort": true, 00:20:49.935 "seek_hole": false, 00:20:49.935 "seek_data": false, 00:20:49.935 "copy": true, 00:20:49.935 
"nvme_iov_md": false 00:20:49.935 }, 00:20:49.935 "driver_specific": { 00:20:49.935 "nvme": [ 00:20:49.935 { 00:20:49.935 "pci_address": "0000:00:11.0", 00:20:49.935 "trid": { 00:20:49.935 "trtype": "PCIe", 00:20:49.935 "traddr": "0000:00:11.0" 00:20:49.935 }, 00:20:49.935 "ctrlr_data": { 00:20:49.935 "cntlid": 0, 00:20:49.935 "vendor_id": "0x1b36", 00:20:49.935 "model_number": "QEMU NVMe Ctrl", 00:20:49.935 "serial_number": "12341", 00:20:49.935 "firmware_revision": "8.0.0", 00:20:49.935 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:49.935 "oacs": { 00:20:49.935 "security": 0, 00:20:49.935 "format": 1, 00:20:49.935 "firmware": 0, 00:20:49.935 "ns_manage": 1 00:20:49.935 }, 00:20:49.935 "multi_ctrlr": false, 00:20:49.935 "ana_reporting": false 00:20:49.935 }, 00:20:49.935 "vs": { 00:20:49.935 "nvme_version": "1.4" 00:20:49.935 }, 00:20:49.935 "ns_data": { 00:20:49.935 "id": 1, 00:20:49.935 "can_share": false 00:20:49.935 } 00:20:49.935 } 00:20:49.935 ], 00:20:49.935 "mp_policy": "active_passive" 00:20:49.935 } 00:20:49.935 } 00:20:49.935 ]' 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:49.935 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:50.194 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=f969fa8d-39ee-430b-8cd5-9c56e25d644b 00:20:50.194 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:50.194 13:02:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f969fa8d-39ee-430b-8cd5-9c56e25d644b 00:20:50.452 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:51.020 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ff63f0b7-59ec-4b52-8ace-6696b70b82cb 00:20:51.020 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff63f0b7-59ec-4b52-8ace-6696b70b82cb 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:51.279 
13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:51.279 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.537 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:51.537 { 00:20:51.537 "name": "edb562b4-2f12-4c11-af30-8a62b2e0db7e", 00:20:51.537 "aliases": [ 00:20:51.537 "lvs/nvme0n1p0" 00:20:51.537 ], 00:20:51.537 "product_name": "Logical Volume", 00:20:51.537 "block_size": 4096, 00:20:51.537 "num_blocks": 26476544, 00:20:51.537 "uuid": "edb562b4-2f12-4c11-af30-8a62b2e0db7e", 00:20:51.537 "assigned_rate_limits": { 00:20:51.537 "rw_ios_per_sec": 0, 00:20:51.537 "rw_mbytes_per_sec": 0, 00:20:51.537 "r_mbytes_per_sec": 0, 00:20:51.537 "w_mbytes_per_sec": 0 00:20:51.537 }, 00:20:51.537 "claimed": false, 00:20:51.537 "zoned": false, 00:20:51.537 "supported_io_types": { 00:20:51.537 "read": true, 00:20:51.537 "write": true, 00:20:51.537 "unmap": true, 00:20:51.537 "flush": false, 00:20:51.537 "reset": true, 00:20:51.537 "nvme_admin": false, 00:20:51.537 "nvme_io": false, 00:20:51.537 "nvme_io_md": false, 00:20:51.537 "write_zeroes": true, 00:20:51.537 "zcopy": false, 00:20:51.537 "get_zone_info": false, 00:20:51.537 "zone_management": false, 00:20:51.537 "zone_append": false, 00:20:51.537 "compare": false, 00:20:51.537 "compare_and_write": false, 00:20:51.537 "abort": false, 00:20:51.537 "seek_hole": true, 00:20:51.537 "seek_data": true, 00:20:51.537 "copy": false, 00:20:51.537 "nvme_iov_md": false 00:20:51.538 }, 00:20:51.538 "driver_specific": { 00:20:51.538 "lvol": { 00:20:51.538 "lvol_store_uuid": "ff63f0b7-59ec-4b52-8ace-6696b70b82cb", 00:20:51.538 "base_bdev": "nvme0n1", 00:20:51.538 "thin_provision": true, 00:20:51.538 "num_allocated_clusters": 0, 00:20:51.538 "snapshot": false, 00:20:51.538 "clone": false, 00:20:51.538 "esnap_clone": false 00:20:51.538 } 00:20:51.538 } 00:20:51.538 } 00:20:51.538 ]' 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:51.538 13:02:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:51.802 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:52.071 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:52.071 { 00:20:52.071 "name": "edb562b4-2f12-4c11-af30-8a62b2e0db7e", 00:20:52.071 "aliases": [ 00:20:52.071 "lvs/nvme0n1p0" 00:20:52.071 ], 00:20:52.071 "product_name": "Logical Volume", 00:20:52.071 "block_size": 4096, 00:20:52.071 "num_blocks": 26476544, 00:20:52.071 "uuid": "edb562b4-2f12-4c11-af30-8a62b2e0db7e", 00:20:52.071 "assigned_rate_limits": { 00:20:52.071 "rw_ios_per_sec": 0, 00:20:52.071 "rw_mbytes_per_sec": 0, 00:20:52.071 "r_mbytes_per_sec": 0, 00:20:52.071 "w_mbytes_per_sec": 0 00:20:52.071 }, 00:20:52.071 "claimed": false, 00:20:52.071 "zoned": false, 00:20:52.071 "supported_io_types": { 00:20:52.071 "read": true, 00:20:52.071 "write": true, 00:20:52.071 "unmap": true, 00:20:52.071 "flush": false, 00:20:52.071 "reset": true, 00:20:52.071 "nvme_admin": false, 00:20:52.071 "nvme_io": false, 00:20:52.071 "nvme_io_md": false, 00:20:52.071 "write_zeroes": true, 00:20:52.071 "zcopy": false, 00:20:52.071 "get_zone_info": false, 00:20:52.072 "zone_management": false, 00:20:52.072 "zone_append": false, 00:20:52.072 "compare": false, 00:20:52.072 "compare_and_write": false, 00:20:52.072 "abort": false, 00:20:52.072 "seek_hole": true, 00:20:52.072 "seek_data": true, 00:20:52.072 "copy": false, 00:20:52.072 "nvme_iov_md": false 00:20:52.072 }, 00:20:52.072 "driver_specific": { 00:20:52.072 "lvol": { 00:20:52.072 "lvol_store_uuid": "ff63f0b7-59ec-4b52-8ace-6696b70b82cb", 00:20:52.072 "base_bdev": "nvme0n1", 00:20:52.072 "thin_provision": true, 00:20:52.072 "num_allocated_clusters": 0, 00:20:52.072 "snapshot": false, 00:20:52.072 "clone": false, 00:20:52.072 "esnap_clone": false 00:20:52.072 } 00:20:52.072 } 00:20:52.072 } 00:20:52.072 ]' 00:20:52.072 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:52.072 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:52.072 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:52.342 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:52.342 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:52.342 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:52.342 13:02:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:52.342 13:02:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:52.601 13:02:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:52.601 13:02:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:52.601 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:52.601 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:52.601 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:52.601 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:52.601 13:02:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b edb562b4-2f12-4c11-af30-8a62b2e0db7e 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:52.860 { 00:20:52.860 "name": "edb562b4-2f12-4c11-af30-8a62b2e0db7e", 00:20:52.860 "aliases": [ 00:20:52.860 "lvs/nvme0n1p0" 00:20:52.860 ], 00:20:52.860 "product_name": "Logical Volume", 00:20:52.860 "block_size": 4096, 00:20:52.860 "num_blocks": 26476544, 00:20:52.860 "uuid": "edb562b4-2f12-4c11-af30-8a62b2e0db7e", 00:20:52.860 "assigned_rate_limits": { 00:20:52.860 "rw_ios_per_sec": 0, 00:20:52.860 "rw_mbytes_per_sec": 0, 00:20:52.860 "r_mbytes_per_sec": 0, 00:20:52.860 "w_mbytes_per_sec": 0 00:20:52.860 }, 00:20:52.860 "claimed": false, 00:20:52.860 "zoned": false, 00:20:52.860 "supported_io_types": { 00:20:52.860 "read": true, 00:20:52.860 "write": true, 00:20:52.860 "unmap": true, 00:20:52.860 "flush": false, 00:20:52.860 "reset": true, 00:20:52.860 "nvme_admin": false, 00:20:52.860 "nvme_io": false, 00:20:52.860 "nvme_io_md": false, 00:20:52.860 "write_zeroes": true, 00:20:52.860 "zcopy": false, 00:20:52.860 "get_zone_info": false, 00:20:52.860 "zone_management": false, 00:20:52.860 "zone_append": false, 00:20:52.860 "compare": false, 00:20:52.860 "compare_and_write": false, 00:20:52.860 "abort": false, 00:20:52.860 "seek_hole": true, 00:20:52.860 "seek_data": true, 00:20:52.860 "copy": false, 00:20:52.860 "nvme_iov_md": false 00:20:52.860 }, 00:20:52.860 "driver_specific": { 00:20:52.860 "lvol": { 00:20:52.860 "lvol_store_uuid": "ff63f0b7-59ec-4b52-8ace-6696b70b82cb", 00:20:52.860 "base_bdev": "nvme0n1", 00:20:52.860 "thin_provision": true, 00:20:52.860 "num_allocated_clusters": 0, 00:20:52.860 "snapshot": false, 00:20:52.860 "clone": false, 00:20:52.860 "esnap_clone": false 00:20:52.860 } 00:20:52.860 } 00:20:52.860 } 00:20:52.860 ]' 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d edb562b4-2f12-4c11-af30-8a62b2e0db7e 
--l2p_dram_limit 10' 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:52.860 13:02:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d edb562b4-2f12-4c11-af30-8a62b2e0db7e --l2p_dram_limit 10 -c nvc0n1p0 00:20:53.119 [2024-08-11 13:02:44.588453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.588727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:53.119 [2024-08-11 13:02:44.588768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:53.119 [2024-08-11 13:02:44.588782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.588926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.588949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.119 [2024-08-11 13:02:44.588965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:53.119 [2024-08-11 13:02:44.588977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.589016] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:53.119 [2024-08-11 13:02:44.589355] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:53.119 [2024-08-11 13:02:44.589384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.589397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:53.119 [2024-08-11 13:02:44.589412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:20:53.119 [2024-08-11 13:02:44.589423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.589567] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 18da6a89-0369-4fbb-b156-181fe149ceb6 00:20:53.119 [2024-08-11 13:02:44.590613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.590659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:53.119 [2024-08-11 13:02:44.590676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:53.119 [2024-08-11 13:02:44.590690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.595379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.595696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.119 [2024-08-11 13:02:44.595741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.602 ms 00:20:53.119 [2024-08-11 13:02:44.595761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.595912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.595939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.119 [2024-08-11 13:02:44.595954] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:53.119 [2024-08-11 13:02:44.595969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.596059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.596082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:53.119 [2024-08-11 13:02:44.596104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:53.119 [2024-08-11 13:02:44.596121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.596156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:53.119 [2024-08-11 13:02:44.597715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.597754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.119 [2024-08-11 13:02:44.597775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:20:53.119 [2024-08-11 13:02:44.597786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.597834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.597850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:53.119 [2024-08-11 13:02:44.597891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:53.119 [2024-08-11 13:02:44.597907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.597938] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:53.119 [2024-08-11 13:02:44.598110] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:53.119 [2024-08-11 13:02:44.598134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:53.119 [2024-08-11 13:02:44.598150] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:53.119 [2024-08-11 13:02:44.598173] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:53.119 [2024-08-11 13:02:44.598188] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:53.119 [2024-08-11 13:02:44.598202] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:53.119 [2024-08-11 13:02:44.598223] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:53.119 [2024-08-11 13:02:44.598237] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:53.119 [2024-08-11 13:02:44.598249] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:53.119 [2024-08-11 13:02:44.598264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.598276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:53.119 [2024-08-11 13:02:44.598300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:20:53.119 [2024-08-11 13:02:44.598323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.598421] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.119 [2024-08-11 13:02:44.598438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:53.119 [2024-08-11 13:02:44.598455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:53.119 [2024-08-11 13:02:44.598467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.119 [2024-08-11 13:02:44.598583] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:53.119 [2024-08-11 13:02:44.598602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:53.119 [2024-08-11 13:02:44.598617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.119 [2024-08-11 13:02:44.598635] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.119 [2024-08-11 13:02:44.598650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:53.119 [2024-08-11 13:02:44.598661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:53.119 [2024-08-11 13:02:44.598674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:53.120 [2024-08-11 13:02:44.598685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:53.120 [2024-08-11 13:02:44.598698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:53.120 [2024-08-11 13:02:44.598708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.120 [2024-08-11 13:02:44.598721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:53.120 [2024-08-11 13:02:44.598733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:53.120 [2024-08-11 13:02:44.598745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.120 [2024-08-11 13:02:44.598756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:53.120 [2024-08-11 13:02:44.598771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:53.120 [2024-08-11 13:02:44.598782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.120 [2024-08-11 13:02:44.598794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:53.120 [2024-08-11 13:02:44.598805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:53.120 [2024-08-11 13:02:44.598817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.120 [2024-08-11 13:02:44.598828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:53.120 [2024-08-11 13:02:44.598841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:53.120 [2024-08-11 13:02:44.598852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.120 [2024-08-11 13:02:44.598864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:53.120 [2024-08-11 13:02:44.599113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:53.120 [2024-08-11 13:02:44.599164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.120 [2024-08-11 13:02:44.599205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:53.120 [2024-08-11 13:02:44.599246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:53.120 [2024-08-11 13:02:44.599283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.120 [2024-08-11 13:02:44.599436] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:53.120 [2024-08-11 13:02:44.599603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:53.120 [2024-08-11 13:02:44.599666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.120 [2024-08-11 13:02:44.599708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:53.120 [2024-08-11 13:02:44.599828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:53.120 [2024-08-11 13:02:44.599921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.120 [2024-08-11 13:02:44.599968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:53.120 [2024-08-11 13:02:44.600096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:53.120 [2024-08-11 13:02:44.600139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.120 [2024-08-11 13:02:44.600248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:53.120 [2024-08-11 13:02:44.600303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:53.120 [2024-08-11 13:02:44.600415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.120 [2024-08-11 13:02:44.600470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:53.120 [2024-08-11 13:02:44.600574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:53.120 [2024-08-11 13:02:44.600713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.120 [2024-08-11 13:02:44.600736] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:53.120 [2024-08-11 13:02:44.600752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:53.120 [2024-08-11 13:02:44.600767] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.120 [2024-08-11 13:02:44.600783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.120 [2024-08-11 13:02:44.600794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:53.120 [2024-08-11 13:02:44.600820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:53.120 [2024-08-11 13:02:44.600831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:53.120 [2024-08-11 13:02:44.600844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:53.120 [2024-08-11 13:02:44.600855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:53.120 [2024-08-11 13:02:44.600883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:53.120 [2024-08-11 13:02:44.600906] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:53.120 [2024-08-11 13:02:44.600924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.120 [2024-08-11 13:02:44.600946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:53.120 [2024-08-11 13:02:44.600961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:53.120 [2024-08-11 13:02:44.600973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:53.120 [2024-08-11 13:02:44.600987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:53.120 [2024-08-11 13:02:44.600999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:53.120 [2024-08-11 13:02:44.601013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:53.120 [2024-08-11 13:02:44.601024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:53.120 [2024-08-11 13:02:44.601041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:53.120 [2024-08-11 13:02:44.601053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:53.120 [2024-08-11 13:02:44.601068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:53.120 [2024-08-11 13:02:44.601079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:53.120 [2024-08-11 13:02:44.601092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:53.120 [2024-08-11 13:02:44.601104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:53.120 [2024-08-11 13:02:44.601118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:53.120 [2024-08-11 13:02:44.601130] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:53.120 [2024-08-11 13:02:44.601145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.120 [2024-08-11 13:02:44.601159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:53.120 [2024-08-11 13:02:44.601173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:53.120 [2024-08-11 13:02:44.601185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:53.120 [2024-08-11 13:02:44.601201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:53.120 [2024-08-11 13:02:44.601215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.120 [2024-08-11 13:02:44.601230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:53.120 [2024-08-11 13:02:44.601243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:20:53.120 [2024-08-11 13:02:44.601258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.120 [2024-08-11 13:02:44.601413] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:53.120 [2024-08-11 13:02:44.601438] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:56.406 [2024-08-11 13:02:47.468012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.468299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:56.406 [2024-08-11 13:02:47.468448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2866.615 ms 00:20:56.406 [2024-08-11 13:02:47.468587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.476670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.477011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.406 [2024-08-11 13:02:47.477154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.934 ms 00:20:56.406 [2024-08-11 13:02:47.477212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.477460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.477542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:56.406 [2024-08-11 13:02:47.477678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:56.406 [2024-08-11 13:02:47.477829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.486296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.486526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.406 [2024-08-11 13:02:47.486654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.329 ms 00:20:56.406 [2024-08-11 13:02:47.486824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.486966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.487029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.406 [2024-08-11 13:02:47.487164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:56.406 [2024-08-11 13:02:47.487224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.487757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.487930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.406 [2024-08-11 13:02:47.488047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:56.406 [2024-08-11 13:02:47.488179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.488392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.488483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.406 [2024-08-11 13:02:47.488637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:56.406 [2024-08-11 13:02:47.488696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.494507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.494781] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.406 [2024-08-11 13:02:47.494966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.690 ms 00:20:56.406 [2024-08-11 13:02:47.495093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.504451] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:56.406 [2024-08-11 13:02:47.507427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.507601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:56.406 [2024-08-11 13:02:47.507637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.166 ms 00:20:56.406 [2024-08-11 13:02:47.507654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.606602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.606674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:56.406 [2024-08-11 13:02:47.606712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.879 ms 00:20:56.406 [2024-08-11 13:02:47.606726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.606997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.607022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:56.406 [2024-08-11 13:02:47.607039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:20:56.406 [2024-08-11 13:02:47.607059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.610647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.610835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:56.406 [2024-08-11 13:02:47.610887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:20:56.406 [2024-08-11 13:02:47.610905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.613968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.614013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:56.406 [2024-08-11 13:02:47.614035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:20:56.406 [2024-08-11 13:02:47.614047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.614385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.614408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:56.406 [2024-08-11 13:02:47.614425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:20:56.406 [2024-08-11 13:02:47.614437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.650091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.650165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:56.406 [2024-08-11 13:02:47.650190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.587 ms 00:20:56.406 [2024-08-11 13:02:47.650203] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.654603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.654663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:56.406 [2024-08-11 13:02:47.654686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.338 ms 00:20:56.406 [2024-08-11 13:02:47.654698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.658498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.658551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:56.406 [2024-08-11 13:02:47.658573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.735 ms 00:20:56.406 [2024-08-11 13:02:47.658596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.662560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.662608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:56.406 [2024-08-11 13:02:47.662630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.908 ms 00:20:56.406 [2024-08-11 13:02:47.662643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.662704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.662723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:56.406 [2024-08-11 13:02:47.662738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:56.406 [2024-08-11 13:02:47.662750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.662864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.406 [2024-08-11 13:02:47.662906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:56.406 [2024-08-11 13:02:47.662922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:56.406 [2024-08-11 13:02:47.662934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.406 [2024-08-11 13:02:47.664023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3075.095 ms, result 0 00:20:56.406 { 00:20:56.406 "name": "ftl0", 00:20:56.406 "uuid": "18da6a89-0369-4fbb-b156-181fe149ceb6" 00:20:56.406 } 00:20:56.406 13:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:56.406 13:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:56.406 13:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:56.406 13:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:56.406 13:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:56.974 /dev/nbd0 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:56.974 1+0 records in 00:20:56.974 1+0 records out 00:20:56.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442016 s, 9.3 MB/s 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:20:56.974 13:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:56.974 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:56.974 [2024-08-11 13:02:48.423761] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:20:56.974 [2024-08-11 13:02:48.423996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87464 ] 00:20:57.232 [2024-08-11 13:02:48.574843] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.232 [2024-08-11 13:02:48.618226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.547  Copying: 161/1024 [MB] (161 MBps) Copying: 326/1024 [MB] (165 MBps) Copying: 492/1024 [MB] (165 MBps) Copying: 658/1024 [MB] (165 MBps) Copying: 823/1024 [MB] (165 MBps) Copying: 984/1024 [MB] (161 MBps) Copying: 1024/1024 [MB] (average 163 MBps) 00:21:03.547 00:21:03.547 13:02:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:06.078 13:02:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:06.078 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:21:06.078 [2024-08-11 13:02:57.448727] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:21:06.078 [2024-08-11 13:02:57.448923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87561 ] 00:21:06.078 [2024-08-11 13:02:57.597667] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.078 [2024-08-11 13:02:57.641516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.983  Copying: 15/1024 [MB] (15 MBps) Copying: 29/1024 [MB] (14 MBps) Copying: 45/1024 [MB] (15 MBps) Copying: 61/1024 [MB] (16 MBps) Copying: 78/1024 [MB] (16 MBps) Copying: 95/1024 [MB] (16 MBps) Copying: 111/1024 [MB] (16 MBps) Copying: 127/1024 [MB] (16 MBps) Copying: 143/1024 [MB] (16 MBps) Copying: 159/1024 [MB] (16 MBps) Copying: 176/1024 [MB] (16 MBps) Copying: 192/1024 [MB] (15 MBps) Copying: 208/1024 [MB] (16 MBps) Copying: 225/1024 [MB] (16 MBps) Copying: 242/1024 [MB] (16 MBps) Copying: 258/1024 [MB] (16 MBps) Copying: 275/1024 [MB] (16 MBps) Copying: 292/1024 [MB] (16 MBps) Copying: 308/1024 [MB] (16 MBps) Copying: 325/1024 [MB] (16 MBps) Copying: 341/1024 [MB] (16 MBps) Copying: 358/1024 [MB] (16 MBps) Copying: 374/1024 [MB] (16 MBps) Copying: 392/1024 [MB] (17 MBps) Copying: 408/1024 [MB] (16 MBps) Copying: 424/1024 [MB] (15 MBps) Copying: 440/1024 [MB] (15 MBps) Copying: 456/1024 [MB] (15 MBps) Copying: 472/1024 [MB] (16 MBps) Copying: 488/1024 [MB] (15 MBps) Copying: 504/1024 [MB] (16 MBps) Copying: 520/1024 [MB] (15 MBps) Copying: 536/1024 [MB] (15 MBps) Copying: 551/1024 [MB] (15 MBps) Copying: 568/1024 [MB] (16 MBps) Copying: 584/1024 [MB] (16 MBps) Copying: 600/1024 [MB] (16 MBps) Copying: 616/1024 [MB] (16 MBps) Copying: 633/1024 [MB] (16 MBps) Copying: 649/1024 [MB] (16 MBps) Copying: 666/1024 [MB] (16 MBps) Copying: 682/1024 [MB] (16 MBps) Copying: 698/1024 [MB] (16 MBps) Copying: 714/1024 [MB] (15 MBps) Copying: 730/1024 [MB] (16 MBps) Copying: 746/1024 [MB] (15 MBps) Copying: 760/1024 [MB] (14 MBps) Copying: 776/1024 [MB] (15 MBps) Copying: 792/1024 [MB] (15 MBps) Copying: 808/1024 [MB] (15 MBps) Copying: 824/1024 [MB] (16 MBps) Copying: 840/1024 [MB] (16 MBps) Copying: 856/1024 [MB] (15 MBps) Copying: 872/1024 [MB] (15 MBps) Copying: 888/1024 [MB] (15 MBps) Copying: 904/1024 [MB] (15 MBps) Copying: 919/1024 [MB] (15 MBps) Copying: 935/1024 [MB] (16 MBps) Copying: 951/1024 [MB] (15 MBps) Copying: 967/1024 [MB] (15 MBps) Copying: 983/1024 [MB] (16 MBps) Copying: 999/1024 [MB] (16 MBps) Copying: 1015/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:22:09.983 00:22:09.983 13:04:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:09.983 13:04:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:10.241 13:04:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:10.500 [2024-08-11 13:04:01.988911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:01.988986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:10.500 [2024-08-11 13:04:01.989011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:10.500 [2024-08-11 13:04:01.989026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:01.989063] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.500 [2024-08-11 13:04:01.989544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:01.989564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:10.500 [2024-08-11 13:04:01.989583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:22:10.500 [2024-08-11 13:04:01.989605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:01.991544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:01.991742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:10.500 [2024-08-11 13:04:01.991780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.901 ms 00:22:10.500 [2024-08-11 13:04:01.991801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.009477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.009563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:10.500 [2024-08-11 13:04:02.009589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.623 ms 00:22:10.500 [2024-08-11 13:04:02.009602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.016436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.016508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:10.500 [2024-08-11 13:04:02.016529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.759 ms 00:22:10.500 [2024-08-11 13:04:02.016559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.018254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.018430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:10.500 [2024-08-11 13:04:02.018583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:22:10.500 [2024-08-11 13:04:02.018641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.022902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.022965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:10.500 [2024-08-11 13:04:02.022987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.191 ms 00:22:10.500 [2024-08-11 13:04:02.023001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.023155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.023176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:10.500 [2024-08-11 13:04:02.023227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:10.500 [2024-08-11 13:04:02.023246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.025129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.025174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:10.500 [2024-08-11 13:04:02.025194] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:22:10.500 [2024-08-11 13:04:02.025206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.026751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.026794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:10.500 [2024-08-11 13:04:02.026816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:22:10.500 [2024-08-11 13:04:02.026828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.028086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.028268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:10.500 [2024-08-11 13:04:02.028302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:22:10.500 [2024-08-11 13:04:02.028316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.029563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.500 [2024-08-11 13:04:02.029609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:10.500 [2024-08-11 13:04:02.029628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:22:10.500 [2024-08-11 13:04:02.029640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.500 [2024-08-11 13:04:02.029688] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:10.500 [2024-08-11 13:04:02.029713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029924] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.029989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:10.500 [2024-08-11 13:04:02.030004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 
13:04:02.030293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:22:10.501 [2024-08-11 13:04:02.030693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.030996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:10.501 [2024-08-11 13:04:02.031294] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:10.501 [2024-08-11 13:04:02.031325] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18da6a89-0369-4fbb-b156-181fe149ceb6 00:22:10.501 [2024-08-11 13:04:02.031349] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:10.501 [2024-08-11 13:04:02.031365] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:10.501 [2024-08-11 13:04:02.031388] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:10.501 [2024-08-11 13:04:02.031403] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:10.501 [2024-08-11 13:04:02.031415] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:10.501 [2024-08-11 13:04:02.031435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:10.501 [2024-08-11 13:04:02.031448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:10.501 [2024-08-11 13:04:02.031460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:10.501 [2024-08-11 13:04:02.031471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:10.501 [2024-08-11 13:04:02.031485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.501 [2024-08-11 13:04:02.031498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:10.501 [2024-08-11 13:04:02.031513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:22:10.501 [2024-08-11 13:04:02.031524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.033145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:10.501 [2024-08-11 13:04:02.033179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:10.501 [2024-08-11 13:04:02.033200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:22:10.501 [2024-08-11 13:04:02.033215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.033318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.501 [2024-08-11 13:04:02.033337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:10.501 [2024-08-11 13:04:02.033353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:10.501 [2024-08-11 13:04:02.033364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.039105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.039412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.501 [2024-08-11 13:04:02.039458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.039472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.039565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.039582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.501 [2024-08-11 13:04:02.039597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.039609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.039726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.039747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.501 [2024-08-11 13:04:02.039766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.039781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.039812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.039838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.501 [2024-08-11 13:04:02.039891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.039909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.049754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.049836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.501 [2024-08-11 13:04:02.049860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.049902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.056679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.056757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.501 [2024-08-11 13:04:02.056779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.056792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 
13:04:02.057255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.057301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.501 [2024-08-11 13:04:02.057323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.057336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.057416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.057441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.501 [2024-08-11 13:04:02.057457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.057469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.057584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.057604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.501 [2024-08-11 13:04:02.057619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.057631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.057717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.057742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.501 [2024-08-11 13:04:02.057758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.057769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.057824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.057841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.501 [2024-08-11 13:04:02.057858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.057894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.057963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.501 [2024-08-11 13:04:02.057994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.501 [2024-08-11 13:04:02.058011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.501 [2024-08-11 13:04:02.058031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.501 [2024-08-11 13:04:02.058212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.284 ms, result 0 00:22:10.501 true 00:22:10.501 13:04:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 87335 00:22:10.501 13:04:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid87335 00:22:10.501 13:04:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:10.759 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:22:10.759 [2024-08-11 13:04:02.178865] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:22:10.759 [2024-08-11 13:04:02.179071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88214 ] 00:22:10.759 [2024-08-11 13:04:02.328563] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.018 [2024-08-11 13:04:02.367247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.362  Copying: 162/1024 [MB] (162 MBps) Copying: 327/1024 [MB] (164 MBps) Copying: 491/1024 [MB] (164 MBps) Copying: 656/1024 [MB] (164 MBps) Copying: 819/1024 [MB] (163 MBps) Copying: 982/1024 [MB] (163 MBps) Copying: 1024/1024 [MB] (average 163 MBps) 00:22:17.362 00:22:17.362 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 87335 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:17.362 13:04:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:17.621 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:22:17.621 [2024-08-11 13:04:08.990403] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:22:17.621 [2024-08-11 13:04:08.990577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88283 ] 00:22:17.621 [2024-08-11 13:04:09.139956] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.622 [2024-08-11 13:04:09.178759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.881 [2024-08-11 13:04:09.264985] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:17.881 [2024-08-11 13:04:09.265078] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:17.881 [2024-08-11 13:04:09.330321] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:17.881 [2024-08-11 13:04:09.330903] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:17.881 [2024-08-11 13:04:09.331129] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:18.142 [2024-08-11 13:04:09.585195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.585543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:18.142 [2024-08-11 13:04:09.585587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:18.142 [2024-08-11 13:04:09.585601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.585706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.585738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.142 [2024-08-11 13:04:09.585752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:18.142 [2024-08-11 13:04:09.585764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.585800] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
00:22:18.142 [2024-08-11 13:04:09.586181] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:18.142 [2024-08-11 13:04:09.586214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.586228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.142 [2024-08-11 13:04:09.586241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:22:18.142 [2024-08-11 13:04:09.586257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.587431] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:18.142 [2024-08-11 13:04:09.589709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.589756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:18.142 [2024-08-11 13:04:09.589776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:22:18.142 [2024-08-11 13:04:09.589788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.589888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.589911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:18.142 [2024-08-11 13:04:09.589930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:18.142 [2024-08-11 13:04:09.589943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.594547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.594614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.142 [2024-08-11 13:04:09.594659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.492 ms 00:22:18.142 [2024-08-11 13:04:09.594677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.594818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.594839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.142 [2024-08-11 13:04:09.594853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:18.142 [2024-08-11 13:04:09.594865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.594983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.595004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:18.142 [2024-08-11 13:04:09.595018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:18.142 [2024-08-11 13:04:09.595029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.595073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:18.142 [2024-08-11 13:04:09.596535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.596577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.142 [2024-08-11 13:04:09.596594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:22:18.142 [2024-08-11 13:04:09.596606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.596654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.596671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:18.142 [2024-08-11 13:04:09.596684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:18.142 [2024-08-11 13:04:09.596695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.596733] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:18.142 [2024-08-11 13:04:09.596768] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:18.142 [2024-08-11 13:04:09.596820] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:18.142 [2024-08-11 13:04:09.596845] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:18.142 [2024-08-11 13:04:09.596999] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:18.142 [2024-08-11 13:04:09.597030] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:18.142 [2024-08-11 13:04:09.597046] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:18.142 [2024-08-11 13:04:09.597061] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:18.142 [2024-08-11 13:04:09.597084] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:18.142 [2024-08-11 13:04:09.597096] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:18.142 [2024-08-11 13:04:09.597107] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:18.142 [2024-08-11 13:04:09.597122] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:18.142 [2024-08-11 13:04:09.597140] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:18.142 [2024-08-11 13:04:09.597177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.597190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:18.142 [2024-08-11 13:04:09.597220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:22:18.142 [2024-08-11 13:04:09.597231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.597334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.142 [2024-08-11 13:04:09.597354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:18.142 [2024-08-11 13:04:09.597367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:18.142 [2024-08-11 13:04:09.597378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.142 [2024-08-11 13:04:09.597491] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:18.142 [2024-08-11 13:04:09.597510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:18.142 [2024-08-11 13:04:09.597523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.143 [2024-08-11 
13:04:09.597534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:18.143 [2024-08-11 13:04:09.597572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:18.143 [2024-08-11 13:04:09.597611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:18.143 [2024-08-11 13:04:09.597631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.143 [2024-08-11 13:04:09.597655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:18.143 [2024-08-11 13:04:09.597677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:18.143 [2024-08-11 13:04:09.597689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.143 [2024-08-11 13:04:09.597700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:18.143 [2024-08-11 13:04:09.597711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:18.143 [2024-08-11 13:04:09.597721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:18.143 [2024-08-11 13:04:09.597744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:18.143 [2024-08-11 13:04:09.597754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:18.143 [2024-08-11 13:04:09.597775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.143 [2024-08-11 13:04:09.597797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:18.143 [2024-08-11 13:04:09.597807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.143 [2024-08-11 13:04:09.597827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:18.143 [2024-08-11 13:04:09.597838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.143 [2024-08-11 13:04:09.597908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:18.143 [2024-08-11 13:04:09.597922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.143 [2024-08-11 13:04:09.597942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:18.143 [2024-08-11 13:04:09.597953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:18.143 [2024-08-11 13:04:09.597964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.143 [2024-08-11 13:04:09.597978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
00:22:18.143 [2024-08-11 13:04:09.597997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:18.143 [2024-08-11 13:04:09.598016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.143 [2024-08-11 13:04:09.598035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:18.143 [2024-08-11 13:04:09.598048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:18.143 [2024-08-11 13:04:09.598059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.143 [2024-08-11 13:04:09.598069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:18.143 [2024-08-11 13:04:09.598079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:18.143 [2024-08-11 13:04:09.598089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.143 [2024-08-11 13:04:09.598105] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:18.143 [2024-08-11 13:04:09.598118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:18.143 [2024-08-11 13:04:09.598129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.143 [2024-08-11 13:04:09.598144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.143 [2024-08-11 13:04:09.598156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:18.143 [2024-08-11 13:04:09.598167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:18.143 [2024-08-11 13:04:09.598179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:18.143 [2024-08-11 13:04:09.598190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:18.143 [2024-08-11 13:04:09.598200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:18.143 [2024-08-11 13:04:09.598211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:18.143 [2024-08-11 13:04:09.598223] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:18.143 [2024-08-11 13:04:09.598237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.143 [2024-08-11 13:04:09.598250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:18.143 [2024-08-11 13:04:09.598262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:18.143 [2024-08-11 13:04:09.598273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:18.143 [2024-08-11 13:04:09.598285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:18.143 [2024-08-11 13:04:09.598299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:18.143 [2024-08-11 13:04:09.598312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:18.143 [2024-08-11 13:04:09.598323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x6920 blk_sz:0x800 00:22:18.143 [2024-08-11 13:04:09.598356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:18.143 [2024-08-11 13:04:09.598376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:18.143 [2024-08-11 13:04:09.598389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:18.143 [2024-08-11 13:04:09.598400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:18.143 [2024-08-11 13:04:09.598412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:18.143 [2024-08-11 13:04:09.598423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:18.143 [2024-08-11 13:04:09.598435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:18.143 [2024-08-11 13:04:09.598446] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:18.143 [2024-08-11 13:04:09.598459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.143 [2024-08-11 13:04:09.598471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:18.143 [2024-08-11 13:04:09.598483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:18.143 [2024-08-11 13:04:09.598494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:18.143 [2024-08-11 13:04:09.598506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:18.143 [2024-08-11 13:04:09.598524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.143 [2024-08-11 13:04:09.598538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:18.143 [2024-08-11 13:04:09.598550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:22:18.143 [2024-08-11 13:04:09.598572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.143 [2024-08-11 13:04:09.619817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.143 [2024-08-11 13:04:09.619925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.143 [2024-08-11 13:04:09.619956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.171 ms 00:22:18.143 [2024-08-11 13:04:09.619980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.143 [2024-08-11 13:04:09.620107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.143 [2024-08-11 13:04:09.620125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:18.143 [2024-08-11 13:04:09.620139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:18.143 
[2024-08-11 13:04:09.620151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.143 [2024-08-11 13:04:09.628326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.143 [2024-08-11 13:04:09.628403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.143 [2024-08-11 13:04:09.628425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.071 ms 00:22:18.143 [2024-08-11 13:04:09.628437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.143 [2024-08-11 13:04:09.628511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.143 [2024-08-11 13:04:09.628528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.144 [2024-08-11 13:04:09.628542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:18.144 [2024-08-11 13:04:09.628553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.628949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.629006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.144 [2024-08-11 13:04:09.629029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:22:18.144 [2024-08-11 13:04:09.629041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.629205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.629232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.144 [2024-08-11 13:04:09.629245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:22:18.144 [2024-08-11 13:04:09.629257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.634171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.634243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.144 [2024-08-11 13:04:09.634263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:22:18.144 [2024-08-11 13:04:09.634275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.636721] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:18.144 [2024-08-11 13:04:09.636936] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:18.144 [2024-08-11 13:04:09.636982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.636997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:18.144 [2024-08-11 13:04:09.637011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.483 ms 00:22:18.144 [2024-08-11 13:04:09.637022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.653188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.653318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:18.144 [2024-08-11 13:04:09.653345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.107 ms 00:22:18.144 [2024-08-11 13:04:09.653368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.655782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.655829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:18.144 [2024-08-11 13:04:09.655857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.315 ms 00:22:18.144 [2024-08-11 13:04:09.655893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.657612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.657655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:18.144 [2024-08-11 13:04:09.657672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:22:18.144 [2024-08-11 13:04:09.657683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.658118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.658147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:18.144 [2024-08-11 13:04:09.658170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:22:18.144 [2024-08-11 13:04:09.658190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.676211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.676304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:18.144 [2024-08-11 13:04:09.676327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.982 ms 00:22:18.144 [2024-08-11 13:04:09.676340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.685068] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:18.144 [2024-08-11 13:04:09.688053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.688104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:18.144 [2024-08-11 13:04:09.688131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.615 ms 00:22:18.144 [2024-08-11 13:04:09.688143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.688257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.688278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:18.144 [2024-08-11 13:04:09.688296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:18.144 [2024-08-11 13:04:09.688309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.688472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.688494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:18.144 [2024-08-11 13:04:09.688520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:18.144 [2024-08-11 13:04:09.688537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.688573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.688590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:22:18.144 [2024-08-11 13:04:09.688603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:18.144 [2024-08-11 13:04:09.688613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.688664] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:18.144 [2024-08-11 13:04:09.688684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.688696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:18.144 [2024-08-11 13:04:09.688709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:18.144 [2024-08-11 13:04:09.688724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.692379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.692431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:18.144 [2024-08-11 13:04:09.692450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:22:18.144 [2024-08-11 13:04:09.692463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.692546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.144 [2024-08-11 13:04:09.692566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:18.144 [2024-08-11 13:04:09.692579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:18.144 [2024-08-11 13:04:09.692590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.144 [2024-08-11 13:04:09.693769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.070 ms, result 0 00:22:58.588  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (26 MBps) Copying: 131/1024 [MB] (26 MBps) Copying: 157/1024 [MB] (26 MBps) Copying: 183/1024 [MB] (25 MBps) Copying: 209/1024 [MB] (26 MBps) Copying: 235/1024 [MB] (25 MBps) Copying: 261/1024 [MB] (26 MBps) Copying: 288/1024 [MB] (26 MBps) Copying: 314/1024 [MB] (26 MBps) Copying: 340/1024 [MB] (26 MBps) Copying: 366/1024 [MB] (25 MBps) Copying: 392/1024 [MB] (26 MBps) Copying: 419/1024 [MB] (26 MBps) Copying: 446/1024 [MB] (26 MBps) Copying: 472/1024 [MB] (26 MBps) Copying: 499/1024 [MB] (26 MBps) Copying: 525/1024 [MB] (25 MBps) Copying: 551/1024 [MB] (25 MBps) Copying: 576/1024 [MB] (25 MBps) Copying: 601/1024 [MB] (24 MBps) Copying: 625/1024 [MB] (23 MBps) Copying: 651/1024 [MB] (26 MBps) Copying: 677/1024 [MB] (26 MBps) Copying: 704/1024 [MB] (26 MBps) Copying: 730/1024 [MB] (26 MBps) Copying: 757/1024 [MB] (27 MBps) Copying: 783/1024 [MB] (26 MBps) Copying: 810/1024 [MB] (26 MBps) Copying: 836/1024 [MB] (26 MBps) Copying: 862/1024 [MB] (25 MBps) Copying: 888/1024 [MB] (26 MBps) Copying: 915/1024 [MB] (26 MBps) Copying: 941/1024 [MB] (26 MBps) Copying: 968/1024 [MB] (26 MBps) Copying: 994/1024 [MB] (26 MBps) Copying: 1021/1024 [MB] (26 MBps) Copying: 1048328/1048576 [kB] (2808 kBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-08-11 13:04:50.032737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.588 [2024-08-11 13:04:50.032826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:58.588 [2024-08-11 13:04:50.032850] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:58.588 [2024-08-11 13:04:50.032862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.588 [2024-08-11 13:04:50.035117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:58.588 [2024-08-11 13:04:50.039414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.588 [2024-08-11 13:04:50.039468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:58.588 [2024-08-11 13:04:50.039488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:22:58.588 [2024-08-11 13:04:50.039500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.588 [2024-08-11 13:04:50.052078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.588 [2024-08-11 13:04:50.052141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:58.588 [2024-08-11 13:04:50.052162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.370 ms 00:22:58.588 [2024-08-11 13:04:50.052193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.588 [2024-08-11 13:04:50.073663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.588 [2024-08-11 13:04:50.073745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:58.588 [2024-08-11 13:04:50.073765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.440 ms 00:22:58.588 [2024-08-11 13:04:50.073778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.588 [2024-08-11 13:04:50.080520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.588 [2024-08-11 13:04:50.080581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:58.588 [2024-08-11 13:04:50.080598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.692 ms 00:22:58.589 [2024-08-11 13:04:50.080610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.589 [2024-08-11 13:04:50.082118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.589 [2024-08-11 13:04:50.082159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:58.589 [2024-08-11 13:04:50.082174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:22:58.589 [2024-08-11 13:04:50.082186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.589 [2024-08-11 13:04:50.085260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.589 [2024-08-11 13:04:50.085451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:58.589 [2024-08-11 13:04:50.085482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.036 ms 00:22:58.589 [2024-08-11 13:04:50.085513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.847 [2024-08-11 13:04:50.194604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.847 [2024-08-11 13:04:50.194732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:58.847 [2024-08-11 13:04:50.194769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.031 ms 00:22:58.847 [2024-08-11 13:04:50.194782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.847 [2024-08-11 13:04:50.196634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:58.847 [2024-08-11 13:04:50.196820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:58.847 [2024-08-11 13:04:50.196846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:22:58.847 [2024-08-11 13:04:50.196859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.847 [2024-08-11 13:04:50.198286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.847 [2024-08-11 13:04:50.198318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:58.847 [2024-08-11 13:04:50.198333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:22:58.847 [2024-08-11 13:04:50.198344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.847 [2024-08-11 13:04:50.199459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.847 [2024-08-11 13:04:50.199497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:58.847 [2024-08-11 13:04:50.199513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:22:58.847 [2024-08-11 13:04:50.199524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.847 [2024-08-11 13:04:50.200575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.847 [2024-08-11 13:04:50.200615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:58.847 [2024-08-11 13:04:50.200631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:22:58.847 [2024-08-11 13:04:50.200642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.847 [2024-08-11 13:04:50.200679] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:58.847 [2024-08-11 13:04:50.200702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:22:58.847 [2024-08-11 13:04:50.200717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:58.847 [2024-08-11 13:04:50.200729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 
00:22:58.848 [2024-08-11 13:04:50.200849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.200990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 
wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201774] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:58.848 [2024-08-11 13:04:50.201845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:58.849 [2024-08-11 13:04:50.201967] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:58.849 [2024-08-11 13:04:50.201990] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18da6a89-0369-4fbb-b156-181fe149ceb6 00:22:58.849 [2024-08-11 13:04:50.202003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:22:58.849 [2024-08-11 13:04:50.202014] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131264 00:22:58.849 [2024-08-11 13:04:50.202024] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:22:58.849 [2024-08-11 13:04:50.202042] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:22:58.849 [2024-08-11 13:04:50.202053] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:58.849 [2024-08-11 13:04:50.202064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:58.849 [2024-08-11 13:04:50.202075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:58.849 [2024-08-11 13:04:50.202085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:58.849 [2024-08-11 13:04:50.202096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:58.849 [2024-08-11 13:04:50.202107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.849 [2024-08-11 13:04:50.202119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:58.849 [2024-08-11 13:04:50.202131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:22:58.849 
[2024-08-11 13:04:50.202151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.203508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.849 [2024-08-11 13:04:50.203542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:58.849 [2024-08-11 13:04:50.203556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:22:58.849 [2024-08-11 13:04:50.203568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.203679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.849 [2024-08-11 13:04:50.203703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:58.849 [2024-08-11 13:04:50.203716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:58.849 [2024-08-11 13:04:50.203727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.208310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.208357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.849 [2024-08-11 13:04:50.208373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.208385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.208456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.208472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.849 [2024-08-11 13:04:50.208483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.208494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.208551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.208576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.849 [2024-08-11 13:04:50.208588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.208599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.208621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.208635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.849 [2024-08-11 13:04:50.208647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.208658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.217758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.218083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.849 [2024-08-11 13:04:50.218113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.218126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.224617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.224690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.849 [2024-08-11 13:04:50.224710] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.224722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.224802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.224819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:58.849 [2024-08-11 13:04:50.224844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.224855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.224940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.224960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:58.849 [2024-08-11 13:04:50.224973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.224998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.225106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.225126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.849 [2024-08-11 13:04:50.225139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.225157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.225217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.225235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:58.849 [2024-08-11 13:04:50.225247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.225258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.225304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.225318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.849 [2024-08-11 13:04:50.225330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.225349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.225400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.849 [2024-08-11 13:04:50.225417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:58.849 [2024-08-11 13:04:50.225429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.849 [2024-08-11 13:04:50.225439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.849 [2024-08-11 13:04:50.225605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 193.895 ms, result 0 00:22:59.417 00:22:59.417 00:22:59.417 13:04:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:01.947 13:04:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:01.947 Invalid opts->opts_size 0 too small, please set opts_size 
correctly 00:23:01.947 [2024-08-11 13:04:53.207273] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:23:01.947 [2024-08-11 13:04:53.207417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88726 ] 00:23:01.947 [2024-08-11 13:04:53.354596] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.947 [2024-08-11 13:04:53.396590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.947 [2024-08-11 13:04:53.485850] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:01.947 [2024-08-11 13:04:53.485971] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.207 [2024-08-11 13:04:53.645894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.645971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.207 [2024-08-11 13:04:53.645993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.207 [2024-08-11 13:04:53.646004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.646099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.646121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.207 [2024-08-11 13:04:53.646144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:02.207 [2024-08-11 13:04:53.646156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.646188] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.207 [2024-08-11 13:04:53.646552] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.207 [2024-08-11 13:04:53.646580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.646593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.207 [2024-08-11 13:04:53.646605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:23:02.207 [2024-08-11 13:04:53.646616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.647764] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.207 [2024-08-11 13:04:53.650089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.650132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:02.207 [2024-08-11 13:04:53.650150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:23:02.207 [2024-08-11 13:04:53.650162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.650252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.650273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.207 [2024-08-11 13:04:53.650287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:02.207 [2024-08-11 13:04:53.650297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.654738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.654813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.207 [2024-08-11 13:04:53.654832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.349 ms 00:23:02.207 [2024-08-11 13:04:53.654843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.655023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.655048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.207 [2024-08-11 13:04:53.655062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:02.207 [2024-08-11 13:04:53.655078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.655193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.655212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.207 [2024-08-11 13:04:53.655224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:02.207 [2024-08-11 13:04:53.655246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.655283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.207 [2024-08-11 13:04:53.656695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.656736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.207 [2024-08-11 13:04:53.656753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:23:02.207 [2024-08-11 13:04:53.656771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.656816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.207 [2024-08-11 13:04:53.656831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.207 [2024-08-11 13:04:53.656844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:02.207 [2024-08-11 13:04:53.656882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.207 [2024-08-11 13:04:53.656916] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.207 [2024-08-11 13:04:53.656971] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.207 [2024-08-11 13:04:53.657035] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.207 [2024-08-11 13:04:53.657063] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:02.207 [2024-08-11 13:04:53.657173] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.207 [2024-08-11 13:04:53.657200] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.207 [2024-08-11 13:04:53.657216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:02.207 [2024-08-11 13:04:53.657239] ftl_layout.c: 
675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.207 [2024-08-11 13:04:53.657253] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.207 [2024-08-11 13:04:53.657265] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.207 [2024-08-11 13:04:53.657276] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.207 [2024-08-11 13:04:53.657286] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.208 [2024-08-11 13:04:53.657301] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.208 [2024-08-11 13:04:53.657314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.208 [2024-08-11 13:04:53.657325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.208 [2024-08-11 13:04:53.657338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:23:02.208 [2024-08-11 13:04:53.657349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.208 [2024-08-11 13:04:53.657451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.208 [2024-08-11 13:04:53.657467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.208 [2024-08-11 13:04:53.657491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:02.208 [2024-08-11 13:04:53.657506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.208 [2024-08-11 13:04:53.657628] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.208 [2024-08-11 13:04:53.657647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.208 [2024-08-11 13:04:53.657660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.208 [2024-08-11 13:04:53.657671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.208 [2024-08-11 13:04:53.657693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.208 [2024-08-11 13:04:53.657715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.208 [2024-08-11 13:04:53.657725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.208 [2024-08-11 13:04:53.657745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.208 [2024-08-11 13:04:53.657755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.208 [2024-08-11 13:04:53.657765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.208 [2024-08-11 13:04:53.657775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.208 [2024-08-11 13:04:53.657786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.208 [2024-08-11 13:04:53.657800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.208 [2024-08-11 13:04:53.657822] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.208 [2024-08-11 13:04:53.657832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.208 [2024-08-11 13:04:53.657852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.208 [2024-08-11 13:04:53.657892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.208 [2024-08-11 13:04:53.657904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.208 [2024-08-11 13:04:53.657924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.208 [2024-08-11 13:04:53.657934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.208 [2024-08-11 13:04:53.657953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.208 [2024-08-11 13:04:53.657963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.208 [2024-08-11 13:04:53.657973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.208 [2024-08-11 13:04:53.657987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.208 [2024-08-11 13:04:53.657998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.208 [2024-08-11 13:04:53.658008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.208 [2024-08-11 13:04:53.658018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.208 [2024-08-11 13:04:53.658028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.208 [2024-08-11 13:04:53.658038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.208 [2024-08-11 13:04:53.658048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.208 [2024-08-11 13:04:53.658058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.208 [2024-08-11 13:04:53.658068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.208 [2024-08-11 13:04:53.658078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.208 [2024-08-11 13:04:53.658088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.208 [2024-08-11 13:04:53.658098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.208 [2024-08-11 13:04:53.658108] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.208 [2024-08-11 13:04:53.658129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.208 [2024-08-11 13:04:53.658140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.208 [2024-08-11 13:04:53.658151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.208 [2024-08-11 13:04:53.658165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.208 [2024-08-11 13:04:53.658177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 
00:23:02.208 [2024-08-11 13:04:53.658187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.208 [2024-08-11 13:04:53.658197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.208 [2024-08-11 13:04:53.658207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.208 [2024-08-11 13:04:53.658218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.208 [2024-08-11 13:04:53.658230] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.208 [2024-08-11 13:04:53.658244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.208 [2024-08-11 13:04:53.658256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.208 [2024-08-11 13:04:53.658268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.208 [2024-08-11 13:04:53.658279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.208 [2024-08-11 13:04:53.658290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.208 [2024-08-11 13:04:53.658301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.208 [2024-08-11 13:04:53.658312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.208 [2024-08-11 13:04:53.658323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.208 [2024-08-11 13:04:53.658334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:02.208 [2024-08-11 13:04:53.658349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.208 [2024-08-11 13:04:53.658361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.208 [2024-08-11 13:04:53.658372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.208 [2024-08-11 13:04:53.658394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.208 [2024-08-11 13:04:53.658406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.208 [2024-08-11 13:04:53.658417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.208 [2024-08-11 13:04:53.658428] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.208 [2024-08-11 13:04:53.658445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.208 
[2024-08-11 13:04:53.658457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.208 [2024-08-11 13:04:53.658469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.208 [2024-08-11 13:04:53.658480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.208 [2024-08-11 13:04:53.658491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.208 [2024-08-11 13:04:53.658503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.208 [2024-08-11 13:04:53.658514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.208 [2024-08-11 13:04:53.658527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:23:02.208 [2024-08-11 13:04:53.658537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.208 [2024-08-11 13:04:53.674804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.208 [2024-08-11 13:04:53.674900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.208 [2024-08-11 13:04:53.674924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.201 ms 00:23:02.208 [2024-08-11 13:04:53.674936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.208 [2024-08-11 13:04:53.675070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.208 [2024-08-11 13:04:53.675101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.208 [2024-08-11 13:04:53.675122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:02.208 [2024-08-11 13:04:53.675135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.208 [2024-08-11 13:04:53.683087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.208 [2024-08-11 13:04:53.683157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.208 [2024-08-11 13:04:53.683177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.846 ms 00:23:02.208 [2024-08-11 13:04:53.683189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.208 [2024-08-11 13:04:53.683268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.208 [2024-08-11 13:04:53.683283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.208 [2024-08-11 13:04:53.683308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.209 [2024-08-11 13:04:53.683319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.683671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.683690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.209 [2024-08-11 13:04:53.683703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:02.209 [2024-08-11 13:04:53.683726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.683913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.683941] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.209 [2024-08-11 13:04:53.683955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:23:02.209 [2024-08-11 13:04:53.683974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.688655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.688721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.209 [2024-08-11 13:04:53.688740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.651 ms 00:23:02.209 [2024-08-11 13:04:53.688751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.691135] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:02.209 [2024-08-11 13:04:53.691324] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.209 [2024-08-11 13:04:53.691350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.691364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.209 [2024-08-11 13:04:53.691377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.372 ms 00:23:02.209 [2024-08-11 13:04:53.691388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.707477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.707573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.209 [2024-08-11 13:04:53.707594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.033 ms 00:23:02.209 [2024-08-11 13:04:53.707606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.709965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.710145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.209 [2024-08-11 13:04:53.710175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:23:02.209 [2024-08-11 13:04:53.710187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.711912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.711954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.209 [2024-08-11 13:04:53.711970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:23:02.209 [2024-08-11 13:04:53.711981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.712404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.712433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.209 [2024-08-11 13:04:53.712448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:23:02.209 [2024-08-11 13:04:53.712459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.730245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.730333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore P2L checkpoints 00:23:02.209 [2024-08-11 13:04:53.730354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.748 ms 00:23:02.209 [2024-08-11 13:04:53.730366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.738935] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:02.209 [2024-08-11 13:04:53.741917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.741971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.209 [2024-08-11 13:04:53.741991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.447 ms 00:23:02.209 [2024-08-11 13:04:53.742002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.742106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.742129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.209 [2024-08-11 13:04:53.742160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:02.209 [2024-08-11 13:04:53.742171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.744034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.744075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.209 [2024-08-11 13:04:53.744091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:23:02.209 [2024-08-11 13:04:53.744102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.744143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.744172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.209 [2024-08-11 13:04:53.744189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:02.209 [2024-08-11 13:04:53.744204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.744250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.209 [2024-08-11 13:04:53.744266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.744277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.209 [2024-08-11 13:04:53.744289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:02.209 [2024-08-11 13:04:53.744300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.747837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.747907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.209 [2024-08-11 13:04:53.747925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.510 ms 00:23:02.209 [2024-08-11 13:04:53.747946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.748036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.209 [2024-08-11 13:04:53.748055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.209 [2024-08-11 13:04:53.748068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.037 ms 00:23:02.209 [2024-08-11 13:04:53.748079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.209 [2024-08-11 13:04:53.755457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.675 ms, result 0 00:23:40.092  Copying: 872/1048576 [kB] (872 kBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-08-11 13:05:31.553431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.554085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:40.092 [2024-08-11 13:05:31.554238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.092 [2024-08-11 13:05:31.554292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.554451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.092 [2024-08-11 13:05:31.554974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.555109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:40.092 [2024-08-11 13:05:31.555226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:23:40.092 [2024-08-11 13:05:31.555262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.555512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.555542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:40.092 [2024-08-11 13:05:31.555561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:40.092 [2024-08-11 13:05:31.555572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.566546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.566630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:40.092 [2024-08-11 13:05:31.566650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.949 ms 00:23:40.092 [2024-08-11 13:05:31.566687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.573487]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.573535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:40.092 [2024-08-11 13:05:31.573551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.753 ms 00:23:40.092 [2024-08-11 13:05:31.573576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.575022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.575062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:40.092 [2024-08-11 13:05:31.575078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:23:40.092 [2024-08-11 13:05:31.575089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.578312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.578378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:40.092 [2024-08-11 13:05:31.578396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:23:40.092 [2024-08-11 13:05:31.578407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.581667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.581728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:40.092 [2024-08-11 13:05:31.581752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:23:40.092 [2024-08-11 13:05:31.581764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.583360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.583398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:40.092 [2024-08-11 13:05:31.583412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:23:40.092 [2024-08-11 13:05:31.583423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.584809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.584998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:40.092 [2024-08-11 13:05:31.585025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:23:40.092 [2024-08-11 13:05:31.585037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.092 [2024-08-11 13:05:31.586179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.092 [2024-08-11 13:05:31.586214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:40.092 [2024-08-11 13:05:31.586228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:23:40.093 [2024-08-11 13:05:31.586238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.093 [2024-08-11 13:05:31.587395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.093 [2024-08-11 13:05:31.587433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:40.093 [2024-08-11 13:05:31.587449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:23:40.093 [2024-08-11 13:05:31.587459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:40.093 [2024-08-11 13:05:31.587495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:40.093 [2024-08-11 13:05:31.587517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:40.093 [2024-08-11 13:05:31.587532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:23:40.093 [2024-08-11 13:05:31.587545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.587994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588421] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:40.093 [2024-08-11 13:05:31.588576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588717] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:40.094 [2024-08-11 13:05:31.588750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:40.094 [2024-08-11 13:05:31.588762] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18da6a89-0369-4fbb-b156-181fe149ceb6 00:23:40.094 [2024-08-11 13:05:31.588773] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:23:40.094 [2024-08-11 13:05:31.588784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136128 00:23:40.094 [2024-08-11 13:05:31.588794] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134144 00:23:40.094 [2024-08-11 13:05:31.588806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:23:40.094 [2024-08-11 13:05:31.588816] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:40.094 [2024-08-11 13:05:31.588837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:40.094 [2024-08-11 13:05:31.588848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:40.094 [2024-08-11 13:05:31.588858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:40.094 [2024-08-11 13:05:31.588917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:40.094 [2024-08-11 13:05:31.588934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.094 [2024-08-11 13:05:31.588946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:40.094 [2024-08-11 13:05:31.588958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:23:40.094 [2024-08-11 13:05:31.588982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.590330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.094 [2024-08-11 13:05:31.590359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:40.094 [2024-08-11 13:05:31.590381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:23:40.094 [2024-08-11 13:05:31.590397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.590494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.094 [2024-08-11 13:05:31.590511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:40.094 [2024-08-11 13:05:31.590524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:40.094 [2024-08-11 13:05:31.590549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.595126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.595176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.094 [2024-08-11 13:05:31.595192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.595203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.595278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.595293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:23:40.094 [2024-08-11 13:05:31.595305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.595316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.595375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.595392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.094 [2024-08-11 13:05:31.595415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.595431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.595453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.595475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.094 [2024-08-11 13:05:31.595487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.595497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.604777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.605085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.094 [2024-08-11 13:05:31.605250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.605305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.611915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.612219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.094 [2024-08-11 13:05:31.612341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.612390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.612499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.612692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.094 [2024-08-11 13:05:31.612747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.612786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.612945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.613019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.094 [2024-08-11 13:05:31.613172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.613228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.613386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.613449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.094 [2024-08-11 13:05:31.613555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.613602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.613799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.613862] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:40.094 [2024-08-11 13:05:31.613922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.614075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.614137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.614154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.094 [2024-08-11 13:05:31.614167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.614178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.614239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.094 [2024-08-11 13:05:31.614255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.094 [2024-08-11 13:05:31.614267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.094 [2024-08-11 13:05:31.614279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.094 [2024-08-11 13:05:31.614431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.958 ms, result 0 00:23:40.353 00:23:40.353 00:23:40.353 13:05:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:42.884 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:42.884 13:05:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:42.884 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:42.884 [2024-08-11 13:05:34.131038] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:23:42.885 [2024-08-11 13:05:34.131196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89135 ] 00:23:42.885 [2024-08-11 13:05:34.271797] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.885 [2024-08-11 13:05:34.316536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.885 [2024-08-11 13:05:34.405413] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:42.885 [2024-08-11 13:05:34.405507] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:43.145 [2024-08-11 13:05:34.566577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.566661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:43.145 [2024-08-11 13:05:34.566683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:43.145 [2024-08-11 13:05:34.566696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.566802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.566824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.145 [2024-08-11 13:05:34.566846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:43.145 [2024-08-11 13:05:34.566859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.566920] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:43.145 [2024-08-11 13:05:34.567371] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:43.145 [2024-08-11 13:05:34.567398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.567420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:43.145 [2024-08-11 13:05:34.567434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:23:43.145 [2024-08-11 13:05:34.567454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.568572] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:43.145 [2024-08-11 13:05:34.570719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.570764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:43.145 [2024-08-11 13:05:34.570783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.153 ms 00:23:43.145 [2024-08-11 13:05:34.570795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.570895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.570917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:43.145 [2024-08-11 13:05:34.570931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:43.145 [2024-08-11 13:05:34.570942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.575240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:43.145 [2024-08-11 13:05:34.575304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.145 [2024-08-11 13:05:34.575322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:23:43.145 [2024-08-11 13:05:34.575335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.575480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.575504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.145 [2024-08-11 13:05:34.575522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:43.145 [2024-08-11 13:05:34.575540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.575647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.575665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:43.145 [2024-08-11 13:05:34.575679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:43.145 [2024-08-11 13:05:34.575691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.575725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:43.145 [2024-08-11 13:05:34.577123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.577298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.145 [2024-08-11 13:05:34.577326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:23:43.145 [2024-08-11 13:05:34.577349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.577405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.577423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:43.145 [2024-08-11 13:05:34.577436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:43.145 [2024-08-11 13:05:34.577452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.577485] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:43.145 [2024-08-11 13:05:34.577515] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:43.145 [2024-08-11 13:05:34.577583] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:43.145 [2024-08-11 13:05:34.577612] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:43.145 [2024-08-11 13:05:34.577724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:43.145 [2024-08-11 13:05:34.577741] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:43.145 [2024-08-11 13:05:34.577757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:43.145 [2024-08-11 13:05:34.577782] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:43.145 [2024-08-11 13:05:34.577797] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:43.145 [2024-08-11 13:05:34.577809] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:43.145 [2024-08-11 13:05:34.577822] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:43.145 [2024-08-11 13:05:34.577833] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:43.145 [2024-08-11 13:05:34.577849] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:43.145 [2024-08-11 13:05:34.577862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.577899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:43.145 [2024-08-11 13:05:34.577917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:23:43.145 [2024-08-11 13:05:34.577930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.145 [2024-08-11 13:05:34.578032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.145 [2024-08-11 13:05:34.578048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:43.145 [2024-08-11 13:05:34.578061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:43.146 [2024-08-11 13:05:34.578084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.146 [2024-08-11 13:05:34.578204] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:43.146 [2024-08-11 13:05:34.578223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:43.146 [2024-08-11 13:05:34.578236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:43.146 [2024-08-11 13:05:34.578270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:43.146 [2024-08-11 13:05:34.578303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.146 [2024-08-11 13:05:34.578329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:43.146 [2024-08-11 13:05:34.578340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:43.146 [2024-08-11 13:05:34.578351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:43.146 [2024-08-11 13:05:34.578362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:43.146 [2024-08-11 13:05:34.578373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:43.146 [2024-08-11 13:05:34.578384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:43.146 [2024-08-11 13:05:34.578406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578417] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:43.146 [2024-08-11 13:05:34.578440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:43.146 [2024-08-11 13:05:34.578473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:43.146 [2024-08-11 13:05:34.578512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:43.146 [2024-08-11 13:05:34.578545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:43.146 [2024-08-11 13:05:34.578577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:43.146 [2024-08-11 13:05:34.578598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:43.146 [2024-08-11 13:05:34.578609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:43.146 [2024-08-11 13:05:34.578619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:43.146 [2024-08-11 13:05:34.578630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:43.146 [2024-08-11 13:05:34.578641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:43.146 [2024-08-11 13:05:34.578652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:43.146 [2024-08-11 13:05:34.578676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:43.146 [2024-08-11 13:05:34.578688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578698] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:43.146 [2024-08-11 13:05:34.578710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:43.146 [2024-08-11 13:05:34.578721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:43.146 [2024-08-11 13:05:34.578744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:43.146 [2024-08-11 13:05:34.578756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:43.146 [2024-08-11 13:05:34.578767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:43.146 
[2024-08-11 13:05:34.578778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:43.146 [2024-08-11 13:05:34.578789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:43.146 [2024-08-11 13:05:34.578800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:43.146 [2024-08-11 13:05:34.578813] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:43.146 [2024-08-11 13:05:34.578827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.146 [2024-08-11 13:05:34.578840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:43.146 [2024-08-11 13:05:34.578852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:43.146 [2024-08-11 13:05:34.578900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:43.146 [2024-08-11 13:05:34.578917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:43.146 [2024-08-11 13:05:34.578930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:43.146 [2024-08-11 13:05:34.578941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:43.146 [2024-08-11 13:05:34.578953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:43.146 [2024-08-11 13:05:34.578966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:43.146 [2024-08-11 13:05:34.578977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:43.146 [2024-08-11 13:05:34.578990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:43.146 [2024-08-11 13:05:34.579002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:43.146 [2024-08-11 13:05:34.579027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:43.146 [2024-08-11 13:05:34.579045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:43.146 [2024-08-11 13:05:34.579057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:43.146 [2024-08-11 13:05:34.579070] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:43.146 [2024-08-11 13:05:34.579087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:43.146 [2024-08-11 13:05:34.579100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:43.146 [2024-08-11 13:05:34.579111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:43.146 [2024-08-11 13:05:34.579127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:43.146 [2024-08-11 13:05:34.579139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:43.146 [2024-08-11 13:05:34.579152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.146 [2024-08-11 13:05:34.579164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:43.146 [2024-08-11 13:05:34.579177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:23:43.146 [2024-08-11 13:05:34.579188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.146 [2024-08-11 13:05:34.596662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.146 [2024-08-11 13:05:34.596735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.146 [2024-08-11 13:05:34.596758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.404 ms 00:23:43.146 [2024-08-11 13:05:34.596771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.146 [2024-08-11 13:05:34.596925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.146 [2024-08-11 13:05:34.596945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:43.146 [2024-08-11 13:05:34.596973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:43.146 [2024-08-11 13:05:34.596986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.146 [2024-08-11 13:05:34.605343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.146 [2024-08-11 13:05:34.605414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.147 [2024-08-11 13:05:34.605435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.255 ms 00:23:43.147 [2024-08-11 13:05:34.605447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.605525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.605542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:43.147 [2024-08-11 13:05:34.605556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:43.147 [2024-08-11 13:05:34.605573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.605937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.605957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:43.147 [2024-08-11 13:05:34.605971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:23:43.147 [2024-08-11 13:05:34.605995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.606167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.606190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:43.147 [2024-08-11 13:05:34.606204] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:23:43.147 [2024-08-11 13:05:34.606216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.610822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.610893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:43.147 [2024-08-11 13:05:34.610912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:23:43.147 [2024-08-11 13:05:34.610924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.613323] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:43.147 [2024-08-11 13:05:34.613498] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:43.147 [2024-08-11 13:05:34.613525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.613538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:43.147 [2024-08-11 13:05:34.613553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:23:43.147 [2024-08-11 13:05:34.613565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.629678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.629778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:43.147 [2024-08-11 13:05:34.629813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.023 ms 00:23:43.147 [2024-08-11 13:05:34.629827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.632122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.632287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:43.147 [2024-08-11 13:05:34.632316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:23:43.147 [2024-08-11 13:05:34.632329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.633972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.634011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:43.147 [2024-08-11 13:05:34.634027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:23:43.147 [2024-08-11 13:05:34.634039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.634453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.634480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:43.147 [2024-08-11 13:05:34.634506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:23:43.147 [2024-08-11 13:05:34.634518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.651995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.652076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:43.147 [2024-08-11 13:05:34.652097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.451 ms 00:23:43.147 [2024-08-11 13:05:34.652110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.660593] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:43.147 [2024-08-11 13:05:34.663449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.663620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:43.147 [2024-08-11 13:05:34.663651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.254 ms 00:23:43.147 [2024-08-11 13:05:34.663666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.663777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.663810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:43.147 [2024-08-11 13:05:34.663828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:43.147 [2024-08-11 13:05:34.663840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.664580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.664619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:43.147 [2024-08-11 13:05:34.664635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:23:43.147 [2024-08-11 13:05:34.664647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.664697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.664713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:43.147 [2024-08-11 13:05:34.664726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:43.147 [2024-08-11 13:05:34.664743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.664788] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:43.147 [2024-08-11 13:05:34.664815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.664827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:43.147 [2024-08-11 13:05:34.664839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:43.147 [2024-08-11 13:05:34.664862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.668344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.668401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:43.147 [2024-08-11 13:05:34.668420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:23:43.147 [2024-08-11 13:05:34.668438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 [2024-08-11 13:05:34.668521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.147 [2024-08-11 13:05:34.668539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:43.147 [2024-08-11 13:05:34.668552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:43.147 [2024-08-11 13:05:34.668564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.147 
[2024-08-11 13:05:34.669703] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.626 ms, result 0 00:24:22.989  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (26 MBps) Copying: 80/1024 [MB] (26 MBps) Copying: 106/1024 [MB] (25 MBps) Copying: 131/1024 [MB] (24 MBps) Copying: 157/1024 [MB] (25 MBps) Copying: 182/1024 [MB] (25 MBps) Copying: 208/1024 [MB] (25 MBps) Copying: 233/1024 [MB] (25 MBps) Copying: 259/1024 [MB] (25 MBps) Copying: 285/1024 [MB] (25 MBps) Copying: 310/1024 [MB] (25 MBps) Copying: 336/1024 [MB] (25 MBps) Copying: 362/1024 [MB] (26 MBps) Copying: 389/1024 [MB] (26 MBps) Copying: 415/1024 [MB] (25 MBps) Copying: 440/1024 [MB] (25 MBps) Copying: 466/1024 [MB] (25 MBps) Copying: 492/1024 [MB] (26 MBps) Copying: 519/1024 [MB] (26 MBps) Copying: 544/1024 [MB] (25 MBps) Copying: 570/1024 [MB] (26 MBps) Copying: 596/1024 [MB] (25 MBps) Copying: 623/1024 [MB] (27 MBps) Copying: 650/1024 [MB] (26 MBps) Copying: 677/1024 [MB] (26 MBps) Copying: 704/1024 [MB] (27 MBps) Copying: 730/1024 [MB] (26 MBps) Copying: 756/1024 [MB] (26 MBps) Copying: 782/1024 [MB] (25 MBps) Copying: 808/1024 [MB] (26 MBps) Copying: 834/1024 [MB] (26 MBps) Copying: 859/1024 [MB] (25 MBps) Copying: 885/1024 [MB] (25 MBps) Copying: 911/1024 [MB] (25 MBps) Copying: 936/1024 [MB] (25 MBps) Copying: 963/1024 [MB] (26 MBps) Copying: 989/1024 [MB] (26 MBps) Copying: 1015/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-08-11 13:06:14.355677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.355770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:22.989 [2024-08-11 13:06:14.355796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.989 [2024-08-11 13:06:14.355811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.355867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:22.989 [2024-08-11 13:06:14.356725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.356765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:22.989 [2024-08-11 13:06:14.356791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:24:22.989 [2024-08-11 13:06:14.356817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.357191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.357463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:22.989 [2024-08-11 13:06:14.357489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:24:22.989 [2024-08-11 13:06:14.357503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.362523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.362758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:22.989 [2024-08-11 13:06:14.362793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.989 ms 00:24:22.989 [2024-08-11 13:06:14.362810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.371076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 
[2024-08-11 13:06:14.371143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:22.989 [2024-08-11 13:06:14.371188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.172 ms 00:24:22.989 [2024-08-11 13:06:14.371203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.373014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.373226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:22.989 [2024-08-11 13:06:14.373260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:24:22.989 [2024-08-11 13:06:14.373276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.376518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.376726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:22.989 [2024-08-11 13:06:14.376763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:24:22.989 [2024-08-11 13:06:14.376794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.380262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.380319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:22.989 [2024-08-11 13:06:14.380340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.404 ms 00:24:22.989 [2024-08-11 13:06:14.380356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.382191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.382240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:22.989 [2024-08-11 13:06:14.382260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:24:22.989 [2024-08-11 13:06:14.382274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.383737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.383956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:22.989 [2024-08-11 13:06:14.383987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:24:22.989 [2024-08-11 13:06:14.384002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.385396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.385441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:22.989 [2024-08-11 13:06:14.385462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:24:22.989 [2024-08-11 13:06:14.385476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.386721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.989 [2024-08-11 13:06:14.386768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:22.989 [2024-08-11 13:06:14.386787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:24:22.989 [2024-08-11 13:06:14.386801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.989 [2024-08-11 13:06:14.386844] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:22.989 [2024-08-11 13:06:14.386915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:22.989 [2024-08-11 13:06:14.386935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:24:22.989 [2024-08-11 13:06:14.386951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.386967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.386982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.386996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387282] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:22.989 [2024-08-11 13:06:14.387297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 
[2024-08-11 13:06:14.387685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.387836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.388121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.388233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.388305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.388506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.388583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.388716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.388864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 
state: free 00:24:22.990 [2024-08-11 13:06:14.389484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 
0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:22.990 [2024-08-11 13:06:14.389907] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:22.990 [2024-08-11 13:06:14.389950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18da6a89-0369-4fbb-b156-181fe149ceb6 00:24:22.990 [2024-08-11 13:06:14.389966] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:22.990 [2024-08-11 13:06:14.389986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:22.990 [2024-08-11 13:06:14.390010] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:22.990 [2024-08-11 13:06:14.390024] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:22.990 [2024-08-11 13:06:14.390038] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:22.990 [2024-08-11 13:06:14.390052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:22.990 [2024-08-11 13:06:14.390067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:22.990 [2024-08-11 13:06:14.390080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:22.990 [2024-08-11 13:06:14.390093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:22.990 [2024-08-11 13:06:14.390109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.990 [2024-08-11 13:06:14.390124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:22.990 [2024-08-11 13:06:14.390140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:24:22.990 [2024-08-11 13:06:14.390166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.990 [2024-08-11 13:06:14.391817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.990 [2024-08-11 13:06:14.391894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:22.990 [2024-08-11 13:06:14.391915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:24:22.990 [2024-08-11 13:06:14.391930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.990 [2024-08-11 13:06:14.392033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.990 [2024-08-11 13:06:14.392051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.991 [2024-08-11 13:06:14.392086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:22.991 [2024-08-11 13:06:14.392102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.397637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.397922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.991 [2024-08-11 13:06:14.398081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.398142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.398476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.398635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.991 [2024-08-11 13:06:14.398772] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.398933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.399092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.399172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.991 [2024-08-11 13:06:14.399303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.399330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.399366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.399385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.991 [2024-08-11 13:06:14.399400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.399414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.409980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.410305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.991 [2024-08-11 13:06:14.410448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.410534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.418108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.418445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.991 [2024-08-11 13:06:14.418588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.418649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.418783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.418987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.991 [2024-08-11 13:06:14.419057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.419239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.419348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.419429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.991 [2024-08-11 13:06:14.419574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.419643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.419971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.420149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.991 [2024-08-11 13:06:14.420290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.420368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.420543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.420684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:24:22.991 [2024-08-11 13:06:14.420834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.420917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.421070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.421211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.991 [2024-08-11 13:06:14.421365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.421443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.421660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.991 [2024-08-11 13:06:14.421820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.991 [2024-08-11 13:06:14.421851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.991 [2024-08-11 13:06:14.421882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.991 [2024-08-11 13:06:14.422072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.352 ms, result 0 00:24:23.250 00:24:23.250 00:24:23.250 13:06:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:25.782 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:24:25.782 13:06:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:24:25.782 13:06:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:24:25.782 13:06:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:25.782 13:06:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:25.782 Process with pid 87335 is not found 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 87335 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 87335 ']' 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 87335 00:24:25.782 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (87335) - No such process 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 87335 is not found' 00:24:25.782 13:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:24:26.094 Remove shared memory files 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:26.094 00:24:26.094 real 3m37.444s 00:24:26.094 user 4m9.691s 00:24:26.094 sys 0m36.717s 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:26.094 ************************************ 00:24:26.094 END TEST ftl_dirty_shutdown 00:24:26.094 ************************************ 00:24:26.094 13:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:26.094 13:06:17 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:26.094 13:06:17 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:26.094 13:06:17 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:26.094 13:06:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:26.094 ************************************ 00:24:26.094 START TEST ftl_upgrade_shutdown 00:24:26.094 ************************************ 00:24:26.094 13:06:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:26.094 * Looking for test storage... 00:24:26.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:26.357 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:26.357 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:24:26.358 
13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=89624 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 89624 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 89624 ']' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:26.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:26.358 13:06:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:26.358 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:26.358 [2024-08-11 13:06:17.826496] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:24:26.358 [2024-08-11 13:06:17.826648] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89624 ] 00:24:26.617 [2024-08-11 13:06:17.972144] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.617 [2024-08-11 13:06:18.015478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.184 13:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:27.184 13:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:27.184 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:27.184 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:24:27.184 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:24:27.184 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:27.184 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:27.442 13:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 
-- # local nb 00:24:27.701 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:27.959 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:27.959 { 00:24:27.960 "name": "basen1", 00:24:27.960 "aliases": [ 00:24:27.960 "677fe92c-df47-4b4e-be09-ff8ccb9230df" 00:24:27.960 ], 00:24:27.960 "product_name": "NVMe disk", 00:24:27.960 "block_size": 4096, 00:24:27.960 "num_blocks": 1310720, 00:24:27.960 "uuid": "677fe92c-df47-4b4e-be09-ff8ccb9230df", 00:24:27.960 "assigned_rate_limits": { 00:24:27.960 "rw_ios_per_sec": 0, 00:24:27.960 "rw_mbytes_per_sec": 0, 00:24:27.960 "r_mbytes_per_sec": 0, 00:24:27.960 "w_mbytes_per_sec": 0 00:24:27.960 }, 00:24:27.960 "claimed": true, 00:24:27.960 "claim_type": "read_many_write_one", 00:24:27.960 "zoned": false, 00:24:27.960 "supported_io_types": { 00:24:27.960 "read": true, 00:24:27.960 "write": true, 00:24:27.960 "unmap": true, 00:24:27.960 "flush": true, 00:24:27.960 "reset": true, 00:24:27.960 "nvme_admin": true, 00:24:27.960 "nvme_io": true, 00:24:27.960 "nvme_io_md": false, 00:24:27.960 "write_zeroes": true, 00:24:27.960 "zcopy": false, 00:24:27.960 "get_zone_info": false, 00:24:27.960 "zone_management": false, 00:24:27.960 "zone_append": false, 00:24:27.960 "compare": true, 00:24:27.960 "compare_and_write": false, 00:24:27.960 "abort": true, 00:24:27.960 "seek_hole": false, 00:24:27.960 "seek_data": false, 00:24:27.960 "copy": true, 00:24:27.960 "nvme_iov_md": false 00:24:27.960 }, 00:24:27.960 "driver_specific": { 00:24:27.960 "nvme": [ 00:24:27.960 { 00:24:27.960 "pci_address": "0000:00:11.0", 00:24:27.960 "trid": { 00:24:27.960 "trtype": "PCIe", 00:24:27.960 "traddr": "0000:00:11.0" 00:24:27.960 }, 00:24:27.960 "ctrlr_data": { 00:24:27.960 "cntlid": 0, 00:24:27.960 "vendor_id": "0x1b36", 00:24:27.960 "model_number": "QEMU NVMe Ctrl", 00:24:27.960 "serial_number": "12341", 00:24:27.960 "firmware_revision": "8.0.0", 00:24:27.960 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:27.960 "oacs": { 00:24:27.960 "security": 0, 00:24:27.960 "format": 1, 00:24:27.960 "firmware": 0, 00:24:27.960 "ns_manage": 1 00:24:27.960 }, 00:24:27.960 "multi_ctrlr": false, 00:24:27.960 "ana_reporting": false 00:24:27.960 }, 00:24:27.960 "vs": { 00:24:27.960 "nvme_version": "1.4" 00:24:27.960 }, 00:24:27.960 "ns_data": { 00:24:27.960 "id": 1, 00:24:27.960 "can_share": false 00:24:27.960 } 00:24:27.960 } 00:24:27.960 ], 00:24:27.960 "mp_policy": "active_passive" 00:24:27.960 } 00:24:27.960 } 00:24:27.960 ]' 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:27.960 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:28.527 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ff63f0b7-59ec-4b52-8ace-6696b70b82cb 00:24:28.527 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:28.527 13:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff63f0b7-59ec-4b52-8ace-6696b70b82cb 00:24:28.785 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:29.043 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=31683eac-0975-419b-baa4-d6d5e398beb4 00:24:29.043 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 31683eac-0975-419b-baa4-d6d5e398beb4 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3335561d-2e2d-494a-b9a9-be21b8ef87f2 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3335561d-2e2d-494a-b9a9-be21b8ef87f2 ]] 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3335561d-2e2d-494a-b9a9-be21b8ef87f2 5120 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3335561d-2e2d-494a-b9a9-be21b8ef87f2 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3335561d-2e2d-494a-b9a9-be21b8ef87f2 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=3335561d-2e2d-494a-b9a9-be21b8ef87f2 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:29.302 13:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3335561d-2e2d-494a-b9a9-be21b8ef87f2 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:29.561 { 00:24:29.561 "name": "3335561d-2e2d-494a-b9a9-be21b8ef87f2", 00:24:29.561 "aliases": [ 00:24:29.561 "lvs/basen1p0" 00:24:29.561 ], 00:24:29.561 "product_name": "Logical Volume", 00:24:29.561 "block_size": 4096, 00:24:29.561 "num_blocks": 5242880, 00:24:29.561 "uuid": "3335561d-2e2d-494a-b9a9-be21b8ef87f2", 00:24:29.561 "assigned_rate_limits": { 00:24:29.561 "rw_ios_per_sec": 0, 00:24:29.561 "rw_mbytes_per_sec": 0, 00:24:29.561 "r_mbytes_per_sec": 0, 00:24:29.561 "w_mbytes_per_sec": 0 00:24:29.561 }, 00:24:29.561 "claimed": false, 00:24:29.561 "zoned": false, 00:24:29.561 "supported_io_types": { 00:24:29.561 "read": true, 00:24:29.561 "write": true, 00:24:29.561 "unmap": true, 00:24:29.561 "flush": false, 00:24:29.561 "reset": true, 00:24:29.561 "nvme_admin": false, 00:24:29.561 "nvme_io": false, 00:24:29.561 "nvme_io_md": false, 00:24:29.561 "write_zeroes": true, 00:24:29.561 
"zcopy": false, 00:24:29.561 "get_zone_info": false, 00:24:29.561 "zone_management": false, 00:24:29.561 "zone_append": false, 00:24:29.561 "compare": false, 00:24:29.561 "compare_and_write": false, 00:24:29.561 "abort": false, 00:24:29.561 "seek_hole": true, 00:24:29.561 "seek_data": true, 00:24:29.561 "copy": false, 00:24:29.561 "nvme_iov_md": false 00:24:29.561 }, 00:24:29.561 "driver_specific": { 00:24:29.561 "lvol": { 00:24:29.561 "lvol_store_uuid": "31683eac-0975-419b-baa4-d6d5e398beb4", 00:24:29.561 "base_bdev": "basen1", 00:24:29.561 "thin_provision": true, 00:24:29.561 "num_allocated_clusters": 0, 00:24:29.561 "snapshot": false, 00:24:29.561 "clone": false, 00:24:29.561 "esnap_clone": false 00:24:29.561 } 00:24:29.561 } 00:24:29.561 } 00:24:29.561 ]' 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:29.561 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:24:30.128 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:30.128 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:30.128 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:30.128 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:30.128 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:30.128 13:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3335561d-2e2d-494a-b9a9-be21b8ef87f2 -c cachen1p0 --l2p_dram_limit 2 00:24:30.387 [2024-08-11 13:06:21.958231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.958314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:30.387 [2024-08-11 13:06:21.958342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:30.387 [2024-08-11 13:06:21.958366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.958473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.958495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:30.387 [2024-08-11 13:06:21.958512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:24:30.387 [2024-08-11 13:06:21.958532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.958571] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:30.387 [2024-08-11 13:06:21.959206] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:30.387 [2024-08-11 13:06:21.959303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.959346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:30.387 [2024-08-11 13:06:21.959519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:24:30.387 [2024-08-11 13:06:21.959575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.959756] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c881ba2a-7bde-40ca-a0ec-4abdd117d4aa 00:24:30.387 [2024-08-11 13:06:21.960930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.961103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:24:30.387 [2024-08-11 13:06:21.961132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:24:30.387 [2024-08-11 13:06:21.961148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.966206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.966459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:30.387 [2024-08-11 13:06:21.966603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.987 ms 00:24:30.387 [2024-08-11 13:06:21.966661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.966855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.966937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:30.387 [2024-08-11 13:06:21.967221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:24:30.387 [2024-08-11 13:06:21.967282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.967421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.967489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:30.387 [2024-08-11 13:06:21.967536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:30.387 [2024-08-11 13:06:21.967676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.967898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:30.387 [2024-08-11 13:06:21.969619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.969775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:30.387 [2024-08-11 13:06:21.969921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.730 ms 00:24:30.387 [2024-08-11 13:06:21.970066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.387 [2024-08-11 13:06:21.970161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.387 [2024-08-11 13:06:21.970277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:30.388 [2024-08-11 13:06:21.970394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:30.388 [2024-08-11 13:06:21.970451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:24:30.388 [2024-08-11 13:06:21.970519] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:30.388 [2024-08-11 13:06:21.970892] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:30.388 [2024-08-11 13:06:21.970935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:30.388 [2024-08-11 13:06:21.970966] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:30.388 [2024-08-11 13:06:21.970996] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:30.388 [2024-08-11 13:06:21.971011] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:30.388 [2024-08-11 13:06:21.971026] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:30.388 [2024-08-11 13:06:21.971038] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:30.388 [2024-08-11 13:06:21.971051] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:30.388 [2024-08-11 13:06:21.971062] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:30.388 [2024-08-11 13:06:21.971079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.388 [2024-08-11 13:06:21.971091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:30.388 [2024-08-11 13:06:21.971106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.566 ms 00:24:30.388 [2024-08-11 13:06:21.971117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.388 [2024-08-11 13:06:21.971220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.388 [2024-08-11 13:06:21.971238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:30.388 [2024-08-11 13:06:21.971255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:24:30.388 [2024-08-11 13:06:21.971267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.388 [2024-08-11 13:06:21.971412] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:30.388 [2024-08-11 13:06:21.971436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:30.388 [2024-08-11 13:06:21.971453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:30.388 [2024-08-11 13:06:21.971471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.971486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:30.388 [2024-08-11 13:06:21.971497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.971511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:30.388 [2024-08-11 13:06:21.971522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:30.388 [2024-08-11 13:06:21.971535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:30.388 [2024-08-11 13:06:21.971546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.971558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:30.388 [2024-08-11 13:06:21.971570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:24:30.388 [2024-08-11 13:06:21.971582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.971593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:30.388 [2024-08-11 13:06:21.971609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:30.388 [2024-08-11 13:06:21.971620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.971633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:30.388 [2024-08-11 13:06:21.971644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:30.388 [2024-08-11 13:06:21.971657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.971669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:30.388 [2024-08-11 13:06:21.971682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:30.388 [2024-08-11 13:06:21.971693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:30.388 [2024-08-11 13:06:21.971706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:30.388 [2024-08-11 13:06:21.971717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:30.388 [2024-08-11 13:06:21.971730] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:30.388 [2024-08-11 13:06:21.971741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:30.388 [2024-08-11 13:06:21.971754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:30.388 [2024-08-11 13:06:21.971765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:30.388 [2024-08-11 13:06:21.971778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:30.388 [2024-08-11 13:06:21.971789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:30.388 [2024-08-11 13:06:21.971805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:30.388 [2024-08-11 13:06:21.971817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:30.388 [2024-08-11 13:06:21.971831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:30.388 [2024-08-11 13:06:21.971854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.972041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:30.388 [2024-08-11 13:06:21.972102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:30.388 [2024-08-11 13:06:21.972147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.972317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:30.388 [2024-08-11 13:06:21.972375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:30.388 [2024-08-11 13:06:21.972417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.972458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:30.388 [2024-08-11 13:06:21.972591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:30.388 [2024-08-11 13:06:21.972648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.972688] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:24:30.388 [2024-08-11 13:06:21.972851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:30.388 [2024-08-11 13:06:21.972931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:30.388 [2024-08-11 13:06:21.973090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:30.388 [2024-08-11 13:06:21.973143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:30.388 [2024-08-11 13:06:21.973187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:30.388 [2024-08-11 13:06:21.973309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:30.388 [2024-08-11 13:06:21.973338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:30.388 [2024-08-11 13:06:21.973351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:30.388 [2024-08-11 13:06:21.973380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:30.388 [2024-08-11 13:06:21.973399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:30.388 [2024-08-11 13:06:21.973426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:30.388 [2024-08-11 13:06:21.973455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:30.388 [2024-08-11 13:06:21.973493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:30.388 [2024-08-11 13:06:21.973507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:30.388 [2024-08-11 13:06:21.973519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:30.388 [2024-08-11 13:06:21.973535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:30.388 [2024-08-11 13:06:21.973630] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:30.388 [2024-08-11 13:06:21.973648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:30.388 [2024-08-11 13:06:21.973675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:30.388 [2024-08-11 13:06:21.973687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:30.388 [2024-08-11 13:06:21.973701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:30.388 [2024-08-11 13:06:21.973716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:30.388 [2024-08-11 13:06:21.973731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:30.388 [2024-08-11 13:06:21.973745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.375 ms 00:24:30.388 [2024-08-11 13:06:21.973761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:30.388 [2024-08-11 13:06:21.973827] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:24:30.388 [2024-08-11 13:06:21.973848] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:32.289 [2024-08-11 13:06:23.884040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.289 [2024-08-11 13:06:23.884122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:32.289 [2024-08-11 13:06:23.884145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1910.224 ms 00:24:32.289 [2024-08-11 13:06:23.884161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.892058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.892137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:32.548 [2024-08-11 13:06:23.892158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.797 ms 00:24:32.548 [2024-08-11 13:06:23.892173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.892248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.892270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:32.548 [2024-08-11 13:06:23.892284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:24:32.548 [2024-08-11 13:06:23.892298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.900753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.900836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:32.548 [2024-08-11 13:06:23.900858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.360 ms 00:24:32.548 [2024-08-11 13:06:23.900894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.900968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.900987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:32.548 [2024-08-11 13:06:23.901001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:32.548 [2024-08-11 13:06:23.901016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.901386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.901410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:32.548 [2024-08-11 13:06:23.901437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.307 ms 00:24:32.548 [2024-08-11 13:06:23.901451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.901506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.901526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:32.548 [2024-08-11 13:06:23.901539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:24:32.548 [2024-08-11 13:06:23.901552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.907310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.907633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:32.548 [2024-08-11 13:06:23.907668] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.728 ms 00:24:32.548 [2024-08-11 13:06:23.907684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.917167] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:32.548 [2024-08-11 13:06:23.918080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.918263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:32.548 [2024-08-11 13:06:23.918301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.230 ms 00:24:32.548 [2024-08-11 13:06:23.918318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.940684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.940767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:24:32.548 [2024-08-11 13:06:23.940795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.294 ms 00:24:32.548 [2024-08-11 13:06:23.940809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.940972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.940995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:32.548 [2024-08-11 13:06:23.941012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:24:32.548 [2024-08-11 13:06:23.941025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.944340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.944532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:24:32.548 [2024-08-11 13:06:23.944569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.274 ms 00:24:32.548 [2024-08-11 13:06:23.944584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.947824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.948033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:24:32.548 [2024-08-11 13:06:23.948069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.160 ms 00:24:32.548 [2024-08-11 13:06:23.948083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.948448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.948470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:32.548 [2024-08-11 13:06:23.948487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:24:32.548 [2024-08-11 13:06:23.948499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.981021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.981096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:24:32.548 [2024-08-11 13:06:23.981134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.454 ms 00:24:32.548 [2024-08-11 13:06:23.981147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.985380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:24:32.548 [2024-08-11 13:06:23.985435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:24:32.548 [2024-08-11 13:06:23.985458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.170 ms 00:24:32.548 [2024-08-11 13:06:23.985472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.989358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.989418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:24:32.548 [2024-08-11 13:06:23.989440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.822 ms 00:24:32.548 [2024-08-11 13:06:23.989453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.993272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.993324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:32.548 [2024-08-11 13:06:23.993346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.762 ms 00:24:32.548 [2024-08-11 13:06:23.993358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.993422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.993451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:32.548 [2024-08-11 13:06:23.993469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:32.548 [2024-08-11 13:06:23.993481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.993598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.548 [2024-08-11 13:06:23.993617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:32.548 [2024-08-11 13:06:23.993633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:24:32.548 [2024-08-11 13:06:23.993645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.548 [2024-08-11 13:06:23.994704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2036.022 ms, result 0 00:24:32.548 { 00:24:32.548 "name": "ftl", 00:24:32.548 "uuid": "c881ba2a-7bde-40ca-a0ec-4abdd117d4aa" 00:24:32.548 } 00:24:32.548 13:06:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:24:32.807 [2024-08-11 13:06:24.308944] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.807 13:06:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:24:33.066 13:06:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:24:33.324 [2024-08-11 13:06:24.901693] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:33.583 13:06:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:24:33.583 [2024-08-11 13:06:25.154270] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:33.583 13:06:25 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:24:34.149 Fill FTL, iteration 1 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=89742 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 89742 /var/tmp/spdk.tgt.sock 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 89742 ']' 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:24:34.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:34.149 13:06:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:34.149 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:34.149 [2024-08-11 13:06:25.701307] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:24:34.149 [2024-08-11 13:06:25.701740] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89742 ] 00:24:34.407 [2024-08-11 13:06:25.850891] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.407 [2024-08-11 13:06:25.892400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.342 13:06:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:35.342 13:06:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:35.343 13:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:35.601 ftln1 00:24:35.601 13:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:35.601 13:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 89742 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 89742 ']' 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 89742 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89742 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:35.860 killing process with pid 89742 00:24:35.860 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:35.861 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89742' 00:24:35.861 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 89742 00:24:35.861 13:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 89742 00:24:36.119 13:06:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:36.119 13:06:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:36.119 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:36.119 [2024-08-11 13:06:27.640594] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:24:36.119 [2024-08-11 13:06:27.640758] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89773 ] 00:24:36.378 [2024-08-11 13:06:27.783948] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.378 [2024-08-11 13:06:27.824210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.750  Copying: 209/1024 [MB] (209 MBps) Copying: 414/1024 [MB] (205 MBps) Copying: 624/1024 [MB] (210 MBps) Copying: 831/1024 [MB] (207 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:24:41.750 00:24:41.750 Calculate MD5 checksum, iteration 1 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:41.750 13:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:41.750 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:41.750 [2024-08-11 13:06:33.244315] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:24:41.750 [2024-08-11 13:06:33.244476] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89833 ] 00:24:42.009 [2024-08-11 13:06:33.395182] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.009 [2024-08-11 13:06:33.438409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.572  Copying: 495/1024 [MB] (495 MBps) Copying: 993/1024 [MB] (498 MBps) Copying: 1024/1024 [MB] (average 492 MBps) 00:24:44.572 00:24:44.572 13:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:44.572 13:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:47.120 Fill FTL, iteration 2 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7c267347b1609d20bdca49f01324db96 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:47.120 13:06:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:47.120 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:47.120 [2024-08-11 13:06:38.285704] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:24:47.120 [2024-08-11 13:06:38.286095] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89890 ] 00:24:47.120 [2024-08-11 13:06:38.435766] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.120 [2024-08-11 13:06:38.479431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.271  Copying: 213/1024 [MB] (213 MBps) Copying: 429/1024 [MB] (216 MBps) Copying: 641/1024 [MB] (212 MBps) Copying: 852/1024 [MB] (211 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:24:52.271 00:24:52.271 Calculate MD5 checksum, iteration 2 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:52.271 13:06:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:52.271 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:52.271 [2024-08-11 13:06:43.797361] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:24:52.271 [2024-08-11 13:06:43.797774] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89950 ] 00:24:52.530 [2024-08-11 13:06:43.947669] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.530 [2024-08-11 13:06:43.986339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.704  Copying: 467/1024 [MB] (467 MBps) Copying: 940/1024 [MB] (473 MBps) Copying: 1024/1024 [MB] (average 462 MBps) 00:24:55.704 00:24:55.704 13:06:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:24:55.704 13:06:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:58.237 13:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:58.237 13:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c12261b6d3e4c78293dd0328c5ed3480 00:24:58.237 13:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:58.237 13:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:58.237 13:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:58.237 [2024-08-11 13:06:49.512646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:58.237 [2024-08-11 13:06:49.512730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:58.237 [2024-08-11 13:06:49.512767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:24:58.237 [2024-08-11 13:06:49.512786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:58.237 [2024-08-11 13:06:49.512828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:58.237 [2024-08-11 13:06:49.512851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:58.237 [2024-08-11 13:06:49.512864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:58.237 [2024-08-11 13:06:49.512905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:58.237 [2024-08-11 13:06:49.512938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:58.237 [2024-08-11 13:06:49.512953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:58.237 [2024-08-11 13:06:49.512978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:58.237 [2024-08-11 13:06:49.512990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:58.237 [2024-08-11 13:06:49.513073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.417 ms, result 0 00:24:58.237 true 00:24:58.237 13:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:58.237 { 00:24:58.237 "name": "ftl", 00:24:58.237 "properties": [ 00:24:58.237 { 00:24:58.237 "name": "superblock_version", 00:24:58.237 "value": 5, 00:24:58.237 "read-only": true 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "name": "base_device", 00:24:58.237 "bands": [ 00:24:58.237 { 00:24:58.237 "id": 0, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 
00:24:58.237 { 00:24:58.237 "id": 1, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 2, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 3, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 4, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 5, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 6, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 7, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 8, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 9, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 10, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 11, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 12, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 13, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 14, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 15, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 16, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 17, 00:24:58.237 "state": "FREE", 00:24:58.237 "validity": 0.0 00:24:58.237 } 00:24:58.237 ], 00:24:58.237 "read-only": true 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "name": "cache_device", 00:24:58.237 "type": "bdev", 00:24:58.237 "chunks": [ 00:24:58.237 { 00:24:58.237 "id": 0, 00:24:58.237 "state": "INACTIVE", 00:24:58.237 "utilization": 0.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 1, 00:24:58.237 "state": "CLOSED", 00:24:58.237 "utilization": 1.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 2, 00:24:58.237 "state": "CLOSED", 00:24:58.237 "utilization": 1.0 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 3, 00:24:58.237 "state": "OPEN", 00:24:58.237 "utilization": 0.001953125 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "id": 4, 00:24:58.237 "state": "OPEN", 00:24:58.237 "utilization": 0.0 00:24:58.237 } 00:24:58.237 ], 00:24:58.237 "read-only": true 00:24:58.237 }, 00:24:58.237 { 00:24:58.237 "name": "verbose_mode", 00:24:58.237 "value": true, 00:24:58.237 "unit": "", 00:24:58.237 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:58.237 }, 00:24:58.237 { 00:24:58.238 "name": "prep_upgrade_on_shutdown", 00:24:58.238 "value": false, 00:24:58.238 "unit": "", 00:24:58.238 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:58.238 } 00:24:58.238 ] 00:24:58.238 } 00:24:58.238 13:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:24:58.496 [2024-08-11 13:06:50.081411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:58.496 [2024-08-11 
13:06:50.081488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:58.496 [2024-08-11 13:06:50.081509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:24:58.496 [2024-08-11 13:06:50.081522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:58.496 [2024-08-11 13:06:50.081560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:58.496 [2024-08-11 13:06:50.081577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:58.496 [2024-08-11 13:06:50.081589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:58.497 [2024-08-11 13:06:50.081600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:58.497 [2024-08-11 13:06:50.081628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:58.497 [2024-08-11 13:06:50.081642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:58.497 [2024-08-11 13:06:50.081654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:58.497 [2024-08-11 13:06:50.081665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:58.497 [2024-08-11 13:06:50.081744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.325 ms, result 0 00:24:58.497 true 00:24:58.755 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:24:58.755 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:58.755 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:59.014 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:24:59.014 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:24:59.014 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:59.273 [2024-08-11 13:06:50.630125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:59.273 [2024-08-11 13:06:50.630210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:59.273 [2024-08-11 13:06:50.630232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:59.273 [2024-08-11 13:06:50.630245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:59.273 [2024-08-11 13:06:50.630284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:59.273 [2024-08-11 13:06:50.630300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:59.273 [2024-08-11 13:06:50.630312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:59.273 [2024-08-11 13:06:50.630323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:59.273 [2024-08-11 13:06:50.630351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:59.273 [2024-08-11 13:06:50.630365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:59.273 [2024-08-11 13:06:50.630377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:59.273 [2024-08-11 13:06:50.630387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:24:59.273 [2024-08-11 13:06:50.630465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.333 ms, result 0 00:24:59.273 true 00:24:59.273 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:59.532 { 00:24:59.532 "name": "ftl", 00:24:59.532 "properties": [ 00:24:59.532 { 00:24:59.532 "name": "superblock_version", 00:24:59.532 "value": 5, 00:24:59.532 "read-only": true 00:24:59.532 }, 00:24:59.532 { 00:24:59.532 "name": "base_device", 00:24:59.532 "bands": [ 00:24:59.532 { 00:24:59.532 "id": 0, 00:24:59.532 "state": "FREE", 00:24:59.532 "validity": 0.0 00:24:59.532 }, 00:24:59.532 { 00:24:59.532 "id": 1, 00:24:59.532 "state": "FREE", 00:24:59.532 "validity": 0.0 00:24:59.532 }, 00:24:59.532 { 00:24:59.532 "id": 2, 00:24:59.532 "state": "FREE", 00:24:59.532 "validity": 0.0 00:24:59.532 }, 00:24:59.532 { 00:24:59.532 "id": 3, 00:24:59.532 "state": "FREE", 00:24:59.532 "validity": 0.0 00:24:59.532 }, 00:24:59.532 { 00:24:59.532 "id": 4, 00:24:59.532 "state": "FREE", 00:24:59.532 "validity": 0.0 00:24:59.532 }, 00:24:59.532 { 00:24:59.532 "id": 5, 00:24:59.532 "state": "FREE", 00:24:59.532 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 6, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 7, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 8, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 9, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 10, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 11, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 12, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 13, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 14, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 15, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 16, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 17, 00:24:59.533 "state": "FREE", 00:24:59.533 "validity": 0.0 00:24:59.533 } 00:24:59.533 ], 00:24:59.533 "read-only": true 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "name": "cache_device", 00:24:59.533 "type": "bdev", 00:24:59.533 "chunks": [ 00:24:59.533 { 00:24:59.533 "id": 0, 00:24:59.533 "state": "INACTIVE", 00:24:59.533 "utilization": 0.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 1, 00:24:59.533 "state": "CLOSED", 00:24:59.533 "utilization": 1.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 2, 00:24:59.533 "state": "CLOSED", 00:24:59.533 "utilization": 1.0 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 3, 00:24:59.533 "state": "OPEN", 00:24:59.533 "utilization": 0.001953125 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "id": 4, 00:24:59.533 "state": "OPEN", 00:24:59.533 "utilization": 0.0 00:24:59.533 } 00:24:59.533 ], 00:24:59.533 "read-only": true 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "name": "verbose_mode", 00:24:59.533 "value": 
true, 00:24:59.533 "unit": "", 00:24:59.533 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:59.533 }, 00:24:59.533 { 00:24:59.533 "name": "prep_upgrade_on_shutdown", 00:24:59.533 "value": true, 00:24:59.533 "unit": "", 00:24:59.533 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:59.533 } 00:24:59.533 ] 00:24:59.533 } 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 89624 ]] 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 89624 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 89624 ']' 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 89624 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89624 00:24:59.533 killing process with pid 89624 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89624' 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 89624 00:24:59.533 13:06:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 89624 00:24:59.533 [2024-08-11 13:06:51.076148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:24:59.533 [2024-08-11 13:06:51.082379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:59.533 [2024-08-11 13:06:51.082450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:59.533 [2024-08-11 13:06:51.082473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:59.533 [2024-08-11 13:06:51.082485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:59.533 [2024-08-11 13:06:51.082519] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:59.533 [2024-08-11 13:06:51.082991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:59.533 [2024-08-11 13:06:51.083012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:59.533 [2024-08-11 13:06:51.083025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.451 ms 00:24:59.533 [2024-08-11 13:06:51.083036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.668997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.669093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:09.512 [2024-08-11 13:06:59.669119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8585.979 ms 00:25:09.512 [2024-08-11 13:06:59.669134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.670762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:09.512 [2024-08-11 13:06:59.670832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:09.512 [2024-08-11 13:06:59.670852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.586 ms 00:25:09.512 [2024-08-11 13:06:59.670883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.672452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.672492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:09.512 [2024-08-11 13:06:59.672511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.513 ms 00:25:09.512 [2024-08-11 13:06:59.672525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.674029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.674077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:09.512 [2024-08-11 13:06:59.674096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.443 ms 00:25:09.512 [2024-08-11 13:06:59.674109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.676642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.676704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:09.512 [2024-08-11 13:06:59.676724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.485 ms 00:25:09.512 [2024-08-11 13:06:59.676738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.676841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.676863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:09.512 [2024-08-11 13:06:59.676896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:25:09.512 [2024-08-11 13:06:59.676921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.678269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.678318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:09.512 [2024-08-11 13:06:59.678337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.320 ms 00:25:09.512 [2024-08-11 13:06:59.678350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.679660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.679704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:09.512 [2024-08-11 13:06:59.679723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.263 ms 00:25:09.512 [2024-08-11 13:06:59.679735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.680942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.681143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:09.512 [2024-08-11 13:06:59.681175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.162 ms 00:25:09.512 [2024-08-11 13:06:59.681189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.682414] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.682462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:09.512 [2024-08-11 13:06:59.682480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:25:09.512 [2024-08-11 13:06:59.682493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.682538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:09.512 [2024-08-11 13:06:59.682566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:09.512 [2024-08-11 13:06:59.682584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:09.512 [2024-08-11 13:06:59.682599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:09.512 [2024-08-11 13:06:59.682615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:09.512 [2024-08-11 13:06:59.682832] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:09.512 [2024-08-11 13:06:59.682846] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c881ba2a-7bde-40ca-a0ec-4abdd117d4aa 00:25:09.512 [2024-08-11 13:06:59.682862] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:09.512 [2024-08-11 13:06:59.682898] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:25:09.512 [2024-08-11 13:06:59.682913] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:09.512 [2024-08-11 13:06:59.682928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:09.512 [2024-08-11 13:06:59.682941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:09.512 [2024-08-11 13:06:59.682955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:09.512 [2024-08-11 13:06:59.682978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:09.512 [2024-08-11 13:06:59.682991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:09.512 [2024-08-11 13:06:59.683003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:09.512 [2024-08-11 13:06:59.683016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.683030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:09.512 [2024-08-11 13:06:59.683046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.481 ms 00:25:09.512 [2024-08-11 13:06:59.683059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.684729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.684946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:09.512 [2024-08-11 13:06:59.684978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.641 ms 00:25:09.512 [2024-08-11 13:06:59.684993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.685120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:09.512 [2024-08-11 13:06:59.685138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:09.512 [2024-08-11 13:06:59.685154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:25:09.512 [2024-08-11 13:06:59.685167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.691399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.512 [2024-08-11 13:06:59.691480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:09.512 [2024-08-11 13:06:59.691502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.512 [2024-08-11 13:06:59.691530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.691598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.512 [2024-08-11 13:06:59.691616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:09.512 [2024-08-11 13:06:59.691630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.512 [2024-08-11 13:06:59.691643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.691784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.512 [2024-08-11 13:06:59.691823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:09.512 [2024-08-11 13:06:59.691838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.512 [2024-08-11 13:06:59.691866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.512 [2024-08-11 13:06:59.691933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:25:09.512 [2024-08-11 13:06:59.691965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:09.513 [2024-08-11 13:06:59.691980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.692009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.702781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.702893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:09.513 [2024-08-11 13:06:59.702918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.702933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.710485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.710572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:09.513 [2024-08-11 13:06:59.710594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.710609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.710729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.710751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:09.513 [2024-08-11 13:06:59.710766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.710780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.710833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.710893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:09.513 [2024-08-11 13:06:59.710911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.710925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.711047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.711069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:09.513 [2024-08-11 13:06:59.711084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.711097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.711163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.711185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:09.513 [2024-08-11 13:06:59.711205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.711218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.711271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.711289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:09.513 [2024-08-11 13:06:59.711302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.711316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.711378] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:09.513 [2024-08-11 13:06:59.711405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:09.513 [2024-08-11 13:06:59.711419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:09.513 [2024-08-11 13:06:59.711432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:09.513 [2024-08-11 13:06:59.711608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8629.242 ms, result 0 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=90134 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:10.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 90134 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 90134 ']' 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:10.448 13:07:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:10.448 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:10.448 [2024-08-11 13:07:01.974006] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:25:10.448 [2024-08-11 13:07:01.974325] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90134 ] 00:25:10.707 [2024-08-11 13:07:02.116725] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.707 [2024-08-11 13:07:02.154975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.965 [2024-08-11 13:07:02.404135] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:10.965 [2024-08-11 13:07:02.404489] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:10.965 [2024-08-11 13:07:02.550034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 13:07:02.550121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:10.965 [2024-08-11 13:07:02.550170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:10.965 [2024-08-11 13:07:02.550197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.550332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 13:07:02.550365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:10.965 [2024-08-11 13:07:02.550386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:25:10.965 [2024-08-11 13:07:02.550406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.550475] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:10.965 [2024-08-11 13:07:02.550956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:10.965 [2024-08-11 13:07:02.551017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 13:07:02.551059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:10.965 [2024-08-11 13:07:02.551096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.564 ms 00:25:10.965 [2024-08-11 13:07:02.551116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.552515] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:10.965 [2024-08-11 13:07:02.554834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 13:07:02.555024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:10.965 [2024-08-11 13:07:02.555066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.326 ms 00:25:10.965 [2024-08-11 13:07:02.555091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.555222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 13:07:02.555261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:10.965 [2024-08-11 13:07:02.555285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:25:10.965 [2024-08-11 13:07:02.555305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.560059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 
13:07:02.560147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:10.965 [2024-08-11 13:07:02.560177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.620 ms 00:25:10.965 [2024-08-11 13:07:02.560220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.560336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 13:07:02.560363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:10.965 [2024-08-11 13:07:02.560384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:10.965 [2024-08-11 13:07:02.560403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.560549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:10.965 [2024-08-11 13:07:02.560578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:10.965 [2024-08-11 13:07:02.560600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:10.965 [2024-08-11 13:07:02.560620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:10.965 [2024-08-11 13:07:02.560683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:10.965 [2024-08-11 13:07:02.562280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.224 [2024-08-11 13:07:02.562464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:11.224 [2024-08-11 13:07:02.562505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.613 ms 00:25:11.224 [2024-08-11 13:07:02.562527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.224 [2024-08-11 13:07:02.562601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.224 [2024-08-11 13:07:02.562643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:11.224 [2024-08-11 13:07:02.562667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:11.224 [2024-08-11 13:07:02.562687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.224 [2024-08-11 13:07:02.562737] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:11.224 [2024-08-11 13:07:02.562790] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:11.224 [2024-08-11 13:07:02.562864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:11.224 [2024-08-11 13:07:02.562935] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:25:11.224 [2024-08-11 13:07:02.563089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:11.224 [2024-08-11 13:07:02.563134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:11.224 [2024-08-11 13:07:02.563159] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:11.224 [2024-08-11 13:07:02.563184] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:11.224 [2024-08-11 13:07:02.563208] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:25:11.224 [2024-08-11 13:07:02.563242] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:11.224 [2024-08-11 13:07:02.563262] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:11.224 [2024-08-11 13:07:02.563282] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:11.224 [2024-08-11 13:07:02.563301] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:11.225 [2024-08-11 13:07:02.563336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.225 [2024-08-11 13:07:02.563377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:11.225 [2024-08-11 13:07:02.563399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:25:11.225 [2024-08-11 13:07:02.563419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.225 [2024-08-11 13:07:02.563574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.225 [2024-08-11 13:07:02.563603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:11.225 [2024-08-11 13:07:02.563635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:25:11.225 [2024-08-11 13:07:02.563654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.225 [2024-08-11 13:07:02.563813] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:11.225 [2024-08-11 13:07:02.563843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:11.225 [2024-08-11 13:07:02.563897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:11.225 [2024-08-11 13:07:02.563926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.563949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:11.225 [2024-08-11 13:07:02.563968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.563987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:11.225 [2024-08-11 13:07:02.564005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:11.225 [2024-08-11 13:07:02.564024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:11.225 [2024-08-11 13:07:02.564043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:11.225 [2024-08-11 13:07:02.564080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:11.225 [2024-08-11 13:07:02.564098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:11.225 [2024-08-11 13:07:02.564135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:11.225 [2024-08-11 13:07:02.564153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:11.225 [2024-08-11 13:07:02.564189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:11.225 [2024-08-11 13:07:02.564207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564232] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:11.225 [2024-08-11 13:07:02.564260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:11.225 [2024-08-11 13:07:02.564280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:11.225 [2024-08-11 13:07:02.564299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:11.225 [2024-08-11 13:07:02.564325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:11.225 [2024-08-11 13:07:02.564345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:11.225 [2024-08-11 13:07:02.564364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:11.225 [2024-08-11 13:07:02.564382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:11.225 [2024-08-11 13:07:02.564401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:11.225 [2024-08-11 13:07:02.564419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:11.225 [2024-08-11 13:07:02.564439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:11.225 [2024-08-11 13:07:02.564457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:11.225 [2024-08-11 13:07:02.564476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:11.225 [2024-08-11 13:07:02.564495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:11.225 [2024-08-11 13:07:02.564513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:11.225 [2024-08-11 13:07:02.564563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:11.225 [2024-08-11 13:07:02.564596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:11.225 [2024-08-11 13:07:02.564637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:11.225 [2024-08-11 13:07:02.564694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:11.225 [2024-08-11 13:07:02.564713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564731] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:11.225 [2024-08-11 13:07:02.564751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:11.225 [2024-08-11 13:07:02.564771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:11.225 [2024-08-11 13:07:02.564798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:11.225 [2024-08-11 13:07:02.564820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:11.225 [2024-08-11 13:07:02.564839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:11.225 [2024-08-11 13:07:02.564858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:11.225 [2024-08-11 13:07:02.564932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:11.225 [2024-08-11 13:07:02.564965] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:11.225 [2024-08-11 13:07:02.564989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:11.225 [2024-08-11 13:07:02.565010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:11.225 [2024-08-11 13:07:02.565033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:11.225 [2024-08-11 13:07:02.565074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:11.225 [2024-08-11 13:07:02.565133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:11.225 [2024-08-11 13:07:02.565152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:11.225 [2024-08-11 13:07:02.565172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:11.225 [2024-08-11 13:07:02.565191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:11.225 [2024-08-11 13:07:02.565353] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:11.225 [2024-08-11 13:07:02.565375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:11.225 [2024-08-11 13:07:02.565417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:11.225 [2024-08-11 13:07:02.565437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:11.225 [2024-08-11 13:07:02.565457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:11.225 [2024-08-11 13:07:02.565480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.225 [2024-08-11 13:07:02.565500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:11.225 [2024-08-11 13:07:02.565521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.749 ms 00:25:11.225 [2024-08-11 13:07:02.565540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.225 [2024-08-11 13:07:02.565692] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:11.225 [2024-08-11 13:07:02.565723] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:13.128 [2024-08-11 13:07:04.565675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.565978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:13.128 [2024-08-11 13:07:04.566109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1999.997 ms 00:25:13.128 [2024-08-11 13:07:04.566234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.574169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.574240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:13.128 [2024-08-11 13:07:04.574266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.692 ms 00:25:13.128 [2024-08-11 13:07:04.574279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.574368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.574400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:13.128 [2024-08-11 13:07:04.574413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:13.128 [2024-08-11 13:07:04.574434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.582922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.582990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:13.128 [2024-08-11 13:07:04.583016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.390 ms 00:25:13.128 [2024-08-11 13:07:04.583028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.583103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.583126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:13.128 [2024-08-11 13:07:04.583141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:13.128 [2024-08-11 13:07:04.583152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.583512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.583541] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:13.128 [2024-08-11 13:07:04.583555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:25:13.128 [2024-08-11 13:07:04.583570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.583632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.583656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:13.128 [2024-08-11 13:07:04.583676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:25:13.128 [2024-08-11 13:07:04.583687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.589311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.589378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:13.128 [2024-08-11 13:07:04.589398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.591 ms 00:25:13.128 [2024-08-11 13:07:04.589416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.591788] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:13.128 [2024-08-11 13:07:04.591837] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:13.128 [2024-08-11 13:07:04.591900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.591918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:13.128 [2024-08-11 13:07:04.591931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.300 ms 00:25:13.128 [2024-08-11 13:07:04.591942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.596213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.596267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:13.128 [2024-08-11 13:07:04.596285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.217 ms 00:25:13.128 [2024-08-11 13:07:04.596313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.598095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.598138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:13.128 [2024-08-11 13:07:04.598155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.714 ms 00:25:13.128 [2024-08-11 13:07:04.598167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.599697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.599902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:13.128 [2024-08-11 13:07:04.599930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.481 ms 00:25:13.128 [2024-08-11 13:07:04.599943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.600390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.600432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:13.128 [2024-08-11 
13:07:04.600448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:25:13.128 [2024-08-11 13:07:04.600460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.629975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.630054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:13.128 [2024-08-11 13:07:04.630077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.485 ms 00:25:13.128 [2024-08-11 13:07:04.630089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.638706] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:13.128 [2024-08-11 13:07:04.639631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.639670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:13.128 [2024-08-11 13:07:04.639689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.438 ms 00:25:13.128 [2024-08-11 13:07:04.639700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.639842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.639913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:13.128 [2024-08-11 13:07:04.639928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:13.128 [2024-08-11 13:07:04.639939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.128 [2024-08-11 13:07:04.640013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.128 [2024-08-11 13:07:04.640031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:13.128 [2024-08-11 13:07:04.640044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:25:13.129 [2024-08-11 13:07:04.640055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.129 [2024-08-11 13:07:04.640089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.129 [2024-08-11 13:07:04.640105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:13.129 [2024-08-11 13:07:04.640123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:13.129 [2024-08-11 13:07:04.640134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.129 [2024-08-11 13:07:04.640175] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:13.129 [2024-08-11 13:07:04.640191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.129 [2024-08-11 13:07:04.640203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:13.129 [2024-08-11 13:07:04.640214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:13.129 [2024-08-11 13:07:04.640225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.129 [2024-08-11 13:07:04.643812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.129 [2024-08-11 13:07:04.643884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:13.129 [2024-08-11 13:07:04.643905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.555 ms 00:25:13.129 [2024-08-11 13:07:04.643925] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:13.129 [2024-08-11 13:07:04.644011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.129 [2024-08-11 13:07:04.644031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:13.129 [2024-08-11 13:07:04.644043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:13.129 [2024-08-11 13:07:04.644068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.129 [2024-08-11 13:07:04.645393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2094.855 ms, result 0 00:25:13.129 [2024-08-11 13:07:04.660326] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.129 [2024-08-11 13:07:04.676346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:13.129 [2024-08-11 13:07:04.684479] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:13.388 13:07:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:13.388 13:07:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:13.388 13:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:13.388 13:07:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:13.388 13:07:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:13.646 [2024-08-11 13:07:05.052696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.646 [2024-08-11 13:07:05.052770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:13.646 [2024-08-11 13:07:05.052793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:13.646 [2024-08-11 13:07:05.052805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.646 [2024-08-11 13:07:05.052845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.646 [2024-08-11 13:07:05.052861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:13.646 [2024-08-11 13:07:05.052897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:13.646 [2024-08-11 13:07:05.052910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.646 [2024-08-11 13:07:05.052940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.646 [2024-08-11 13:07:05.052954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:13.646 [2024-08-11 13:07:05.052973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:13.646 [2024-08-11 13:07:05.052984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.646 [2024-08-11 13:07:05.053066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.364 ms, result 0 00:25:13.646 true 00:25:13.646 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:13.905 { 00:25:13.905 "name": "ftl", 00:25:13.905 "properties": [ 00:25:13.905 { 00:25:13.905 "name": "superblock_version", 00:25:13.905 "value": 5, 00:25:13.905 "read-only": true 00:25:13.905 }, 00:25:13.905 { 
00:25:13.905 "name": "base_device", 00:25:13.905 "bands": [ 00:25:13.905 { 00:25:13.905 "id": 0, 00:25:13.905 "state": "CLOSED", 00:25:13.905 "validity": 1.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 1, 00:25:13.905 "state": "CLOSED", 00:25:13.905 "validity": 1.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 2, 00:25:13.905 "state": "CLOSED", 00:25:13.905 "validity": 0.007843137254901933 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 3, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 4, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 5, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 6, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 7, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 8, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 9, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 10, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 11, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 12, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 13, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 14, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 15, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 16, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 17, 00:25:13.905 "state": "FREE", 00:25:13.905 "validity": 0.0 00:25:13.905 } 00:25:13.905 ], 00:25:13.905 "read-only": true 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "name": "cache_device", 00:25:13.905 "type": "bdev", 00:25:13.905 "chunks": [ 00:25:13.905 { 00:25:13.905 "id": 0, 00:25:13.905 "state": "INACTIVE", 00:25:13.905 "utilization": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 1, 00:25:13.905 "state": "OPEN", 00:25:13.905 "utilization": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 2, 00:25:13.905 "state": "OPEN", 00:25:13.905 "utilization": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 3, 00:25:13.905 "state": "FREE", 00:25:13.905 "utilization": 0.0 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "id": 4, 00:25:13.905 "state": "FREE", 00:25:13.905 "utilization": 0.0 00:25:13.905 } 00:25:13.905 ], 00:25:13.905 "read-only": true 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "name": "verbose_mode", 00:25:13.905 "value": true, 00:25:13.905 "unit": "", 00:25:13.905 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:13.905 }, 00:25:13.905 { 00:25:13.905 "name": "prep_upgrade_on_shutdown", 00:25:13.905 "value": false, 00:25:13.905 "unit": "", 00:25:13.906 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:13.906 } 00:25:13.906 ] 00:25:13.906 } 00:25:13.906 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:25:13.906 13:07:05 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:13.906 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:14.183 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:25:14.183 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:25:14.183 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:25:14.183 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:25:14.183 13:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:14.454 Validate MD5 checksum, iteration 1 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:14.454 13:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:14.713 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:14.713 [2024-08-11 13:07:06.104808] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:25:14.713 [2024-08-11 13:07:06.105009] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90191 ] 00:25:14.713 [2024-08-11 13:07:06.256304] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.713 [2024-08-11 13:07:06.298082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.726  Copying: 481/1024 [MB] (481 MBps) Copying: 991/1024 [MB] (510 MBps) Copying: 1024/1024 [MB] (average 490 MBps) 00:25:17.726 00:25:17.984 13:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:17.984 13:07:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:20.517 Validate MD5 checksum, iteration 2 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7c267347b1609d20bdca49f01324db96 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7c267347b1609d20bdca49f01324db96 != \7\c\2\6\7\3\4\7\b\1\6\0\9\d\2\0\b\d\c\a\4\9\f\0\1\3\2\4\d\b\9\6 ]] 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:20.517 13:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:20.517 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:20.517 [2024-08-11 13:07:11.697885] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:25:20.517 [2024-08-11 13:07:11.698100] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90249 ] 00:25:20.517 [2024-08-11 13:07:11.854018] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.517 [2024-08-11 13:07:11.897627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.729  Copying: 486/1024 [MB] (486 MBps) Copying: 974/1024 [MB] (488 MBps) Copying: 1024/1024 [MB] (average 488 MBps) 00:25:24.729 00:25:24.729 13:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:24.729 13:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c12261b6d3e4c78293dd0328c5ed3480 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c12261b6d3e4c78293dd0328c5ed3480 != \c\1\2\2\6\1\b\6\d\3\e\4\c\7\8\2\9\3\d\d\0\3\2\8\c\5\e\d\3\4\8\0 ]] 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 90134 ]] 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 90134 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=90322 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 90322 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 90322 ']' 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
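For readers following the trace, the step the log has just executed (upgrade_shutdown.sh@114-115) is the dirty-shutdown/restart sequence: the running spdk_tgt is killed with SIGKILL while the FTL device is still dirty, then a fresh target is launched from the tgt.json config saved earlier so that FTL startup has to recover state. The shell below is a minimal sketch reconstructed only from the xtrace lines visible above (ftl/common.sh@137-139 and @81-91); it is not the verbatim contents of test/ftl/common.sh, and the waitforlisten helper is the one from autotest_common.sh referenced in the log.

    # Sketch of the dirty shutdown, as reconstructed from the xtrace above.
    tcp_target_shutdown_dirty() {
        # common.sh@137-139 in the trace: SIGKILL the target, drop the pid
        [[ -n "$spdk_tgt_pid" ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    # Sketch of the relaunch, as reconstructed from common.sh@81-91 in the trace.
    tcp_target_setup() {
        # Restart spdk_tgt on core 0 with the JSON config written before the
        # kill, then wait for its RPC socket to come up.
        local spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
            --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"   # helper from autotest_common.sh
    }

The startup log that follows shows the new target (pid 90322) coming up and FTL recovering the dirty state: band states, P2L checkpoints, open NV-cache chunks, and the L2P are restored from shared memory before the checksum validation is repeated.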
00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:26.672 13:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:26.931 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:26.931 [2024-08-11 13:07:18.363567] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:25:26.931 [2024-08-11 13:07:18.363974] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90322 ] 00:25:26.931 [2024-08-11 13:07:18.507849] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 90134 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:25:27.190 [2024-08-11 13:07:18.547456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.450 [2024-08-11 13:07:18.800244] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:27.450 [2024-08-11 13:07:18.800334] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:27.450 [2024-08-11 13:07:18.945947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.946007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:27.450 [2024-08-11 13:07:18.946029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:27.450 [2024-08-11 13:07:18.946047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.946169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.946190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:27.450 [2024-08-11 13:07:18.946203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:25:27.450 [2024-08-11 13:07:18.946215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.946251] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:27.450 [2024-08-11 13:07:18.946606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:27.450 [2024-08-11 13:07:18.946651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.946676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:27.450 [2024-08-11 13:07:18.946690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:25:27.450 [2024-08-11 13:07:18.946702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.947225] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:27.450 [2024-08-11 13:07:18.951200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.951268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:27.450 [2024-08-11 13:07:18.951296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.981 ms 00:25:27.450 [2024-08-11 13:07:18.951317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 
[2024-08-11 13:07:18.952594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.952704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:27.450 [2024-08-11 13:07:18.952729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:27.450 [2024-08-11 13:07:18.952742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.953245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.953281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:27.450 [2024-08-11 13:07:18.953297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:25:27.450 [2024-08-11 13:07:18.953309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.953377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.953396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:27.450 [2024-08-11 13:07:18.953409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:27.450 [2024-08-11 13:07:18.953429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.953470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.953485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:27.450 [2024-08-11 13:07:18.953511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:27.450 [2024-08-11 13:07:18.953534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.953572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:27.450 [2024-08-11 13:07:18.954893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.954936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:27.450 [2024-08-11 13:07:18.954954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.302 ms 00:25:27.450 [2024-08-11 13:07:18.954979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.955044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.955061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:27.450 [2024-08-11 13:07:18.955074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:27.450 [2024-08-11 13:07:18.955085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.955137] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:27.450 [2024-08-11 13:07:18.955171] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:27.450 [2024-08-11 13:07:18.955229] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:27.450 [2024-08-11 13:07:18.955256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:25:27.450 [2024-08-11 13:07:18.955368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 
00:25:27.450 [2024-08-11 13:07:18.955386] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:27.450 [2024-08-11 13:07:18.955401] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:27.450 [2024-08-11 13:07:18.955434] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:27.450 [2024-08-11 13:07:18.955448] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:27.450 [2024-08-11 13:07:18.955464] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:27.450 [2024-08-11 13:07:18.955476] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:27.450 [2024-08-11 13:07:18.955487] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:27.450 [2024-08-11 13:07:18.955498] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:27.450 [2024-08-11 13:07:18.955511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.955523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:27.450 [2024-08-11 13:07:18.955535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:25:27.450 [2024-08-11 13:07:18.955545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.955640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.450 [2024-08-11 13:07:18.955674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:27.450 [2024-08-11 13:07:18.955692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:25:27.450 [2024-08-11 13:07:18.955704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.450 [2024-08-11 13:07:18.955821] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:27.450 [2024-08-11 13:07:18.955997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:27.450 [2024-08-11 13:07:18.956055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:27.450 [2024-08-11 13:07:18.956195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:27.450 [2024-08-11 13:07:18.956232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:27.450 [2024-08-11 13:07:18.956254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:27.450 [2024-08-11 13:07:18.956264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:27.450 [2024-08-11 13:07:18.956275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:27.450 [2024-08-11 13:07:18.956295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:27.450 [2024-08-11 13:07:18.956306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:27.450 [2024-08-11 13:07:18.956327] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:27.450 [2024-08-11 13:07:18.956337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:27.450 [2024-08-11 13:07:18.956364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:27.450 [2024-08-11 13:07:18.956376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:27.450 [2024-08-11 13:07:18.956397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:27.450 [2024-08-11 13:07:18.956407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:27.450 [2024-08-11 13:07:18.956418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:27.450 [2024-08-11 13:07:18.956428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:27.450 [2024-08-11 13:07:18.956439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:27.450 [2024-08-11 13:07:18.956449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:27.450 [2024-08-11 13:07:18.956460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:27.450 [2024-08-11 13:07:18.956470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:27.450 [2024-08-11 13:07:18.956480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:27.450 [2024-08-11 13:07:18.956490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:27.450 [2024-08-11 13:07:18.956501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:27.450 [2024-08-11 13:07:18.956511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:27.450 [2024-08-11 13:07:18.956522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:27.450 [2024-08-11 13:07:18.956535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:27.450 [2024-08-11 13:07:18.956557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:27.450 [2024-08-11 13:07:18.956567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.450 [2024-08-11 13:07:18.956578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:27.450 [2024-08-11 13:07:18.956588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:27.451 [2024-08-11 13:07:18.956598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.451 [2024-08-11 13:07:18.956610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:27.451 [2024-08-11 13:07:18.956621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:27.451 [2024-08-11 13:07:18.956631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.451 [2024-08-11 13:07:18.956641] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:27.451 [2024-08-11 13:07:18.956658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:27.451 [2024-08-11 13:07:18.956670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:27.451 [2024-08-11 
13:07:18.956682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:27.451 [2024-08-11 13:07:18.956698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:27.451 [2024-08-11 13:07:18.956709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:27.451 [2024-08-11 13:07:18.956725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:27.451 [2024-08-11 13:07:18.956737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:27.451 [2024-08-11 13:07:18.956747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:27.451 [2024-08-11 13:07:18.956758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:27.451 [2024-08-11 13:07:18.956771] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:27.451 [2024-08-11 13:07:18.956785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:27.451 [2024-08-11 13:07:18.956810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:27.451 [2024-08-11 13:07:18.956844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:27.451 [2024-08-11 13:07:18.956856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:27.451 [2024-08-11 13:07:18.956897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:27.451 [2024-08-11 13:07:18.956921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.956992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.957005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:27.451 [2024-08-11 13:07:18.957022] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:27.451 [2024-08-11 13:07:18.957036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.957058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:27.451 [2024-08-11 13:07:18.957070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:27.451 [2024-08-11 13:07:18.957081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:27.451 [2024-08-11 13:07:18.957093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:27.451 [2024-08-11 13:07:18.957107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.957121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:27.451 [2024-08-11 13:07:18.957142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.353 ms 00:25:27.451 [2024-08-11 13:07:18.957153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.964150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.964231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:27.451 [2024-08-11 13:07:18.964251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.914 ms 00:25:27.451 [2024-08-11 13:07:18.964277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.964349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.964364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:27.451 [2024-08-11 13:07:18.964377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:27.451 [2024-08-11 13:07:18.964387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.972786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.972857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:27.451 [2024-08-11 13:07:18.972918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.276 ms 00:25:27.451 [2024-08-11 13:07:18.972932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.973006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.973022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:27.451 [2024-08-11 13:07:18.973035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:27.451 [2024-08-11 13:07:18.973060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.973203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.973221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:27.451 [2024-08-11 13:07:18.973239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:25:27.451 
[2024-08-11 13:07:18.973251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.973311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.973327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:27.451 [2024-08-11 13:07:18.973349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:25:27.451 [2024-08-11 13:07:18.973365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.979027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.979088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:27.451 [2024-08-11 13:07:18.979106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.630 ms 00:25:27.451 [2024-08-11 13:07:18.979125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.979293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.979314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:25:27.451 [2024-08-11 13:07:18.979328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:27.451 [2024-08-11 13:07:18.979339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.994075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.994190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:25:27.451 [2024-08-11 13:07:18.994224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.703 ms 00:25:27.451 [2024-08-11 13:07:18.994254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:18.996197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:18.996251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:27.451 [2024-08-11 13:07:18.996272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.489 ms 00:25:27.451 [2024-08-11 13:07:18.996288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:19.015542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:19.015630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:27.451 [2024-08-11 13:07:19.015664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.196 ms 00:25:27.451 [2024-08-11 13:07:19.015677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:19.015942] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:25:27.451 [2024-08-11 13:07:19.016090] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:25:27.451 [2024-08-11 13:07:19.016240] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:25:27.451 [2024-08-11 13:07:19.016368] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:25:27.451 [2024-08-11 13:07:19.016383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:19.016396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Preprocess P2L checkpoints 00:25:27.451 [2024-08-11 13:07:19.016418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:25:27.451 [2024-08-11 13:07:19.016430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:19.016509] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:25:27.451 [2024-08-11 13:07:19.016529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:19.016541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:25:27.451 [2024-08-11 13:07:19.016560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:25:27.451 [2024-08-11 13:07:19.016571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:19.019329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:19.019379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:25:27.451 [2024-08-11 13:07:19.019398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.724 ms 00:25:27.451 [2024-08-11 13:07:19.019411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.451 [2024-08-11 13:07:19.020162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.451 [2024-08-11 13:07:19.020202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:25:27.451 [2024-08-11 13:07:19.020218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:27.451 [2024-08-11 13:07:19.020236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.452 [2024-08-11 13:07:19.020474] ftl_nv_cache.c:2472:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:25:28.019 [2024-08-11 13:07:19.542760] ftl_nv_cache.c:2409:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:25:28.019 [2024-08-11 13:07:19.543001] ftl_nv_cache.c:2472:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:25:28.586 [2024-08-11 13:07:20.059688] ftl_nv_cache.c:2409:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:25:28.586 [2024-08-11 13:07:20.059954] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:28.586 [2024-08-11 13:07:20.059997] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:28.586 [2024-08-11 13:07:20.060065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.060097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:25:28.586 [2024-08-11 13:07:20.060126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1039.748 ms 00:25:28.586 [2024-08-11 13:07:20.060164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.060271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.060304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:25:28.586 [2024-08-11 13:07:20.060331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:28.586 [2024-08-11 
13:07:20.060353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.073310] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:28.586 [2024-08-11 13:07:20.073683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.073737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:28.586 [2024-08-11 13:07:20.073769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.285 ms 00:25:28.586 [2024-08-11 13:07:20.073795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.075110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.075183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:25:28.586 [2024-08-11 13:07:20.075214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.039 ms 00:25:28.586 [2024-08-11 13:07:20.075236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.079075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.079154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:25:28.586 [2024-08-11 13:07:20.079185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.774 ms 00:25:28.586 [2024-08-11 13:07:20.079208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.079348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.079384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:25:28.586 [2024-08-11 13:07:20.079410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:28.586 [2024-08-11 13:07:20.079435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.079675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.079711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:28.586 [2024-08-11 13:07:20.079749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:28.586 [2024-08-11 13:07:20.079774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.079842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.079914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:28.586 [2024-08-11 13:07:20.079943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:25:28.586 [2024-08-11 13:07:20.079967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.080049] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:28.586 [2024-08-11 13:07:20.080083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.080107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:28.586 [2024-08-11 13:07:20.080151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:28.586 [2024-08-11 13:07:20.080181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.080308] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.586 [2024-08-11 13:07:20.080339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:28.586 [2024-08-11 13:07:20.080364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:25:28.586 [2024-08-11 13:07:20.080389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.586 [2024-08-11 13:07:20.082285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1135.503 ms, result 0 00:25:28.586 [2024-08-11 13:07:20.095712] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.586 [2024-08-11 13:07:20.111942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:28.586 [2024-08-11 13:07:20.120074] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:28.586 13:07:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:28.586 13:07:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:28.586 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:28.586 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:28.586 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:25:28.586 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:28.586 Validate MD5 checksum, iteration 1 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:28.587 13:07:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:28.845 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:28.845 [2024-08-11 13:07:20.252435] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:25:28.845 [2024-08-11 13:07:20.252709] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90349 ] 00:25:28.845 [2024-08-11 13:07:20.414360] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.104 [2024-08-11 13:07:20.456774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.240  Copying: 455/1024 [MB] (455 MBps) Copying: 927/1024 [MB] (472 MBps) Copying: 1024/1024 [MB] (average 468 MBps) 00:25:32.240 00:25:32.240 13:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:32.240 13:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:34.772 Validate MD5 checksum, iteration 2 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7c267347b1609d20bdca49f01324db96 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7c267347b1609d20bdca49f01324db96 != \7\c\2\6\7\3\4\7\b\1\6\0\9\d\2\0\b\d\c\a\4\9\f\0\1\3\2\4\d\b\9\6 ]] 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:34.772 13:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:34.772 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:34.772 [2024-08-11 13:07:25.980700] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:25:34.772 [2024-08-11 13:07:25.981673] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90405 ] 00:25:34.772 [2024-08-11 13:07:26.130269] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.772 [2024-08-11 13:07:26.167058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.600  Copying: 493/1024 [MB] (493 MBps) Copying: 956/1024 [MB] (463 MBps) Copying: 1024/1024 [MB] (average 473 MBps) 00:25:37.600 00:25:37.600 13:07:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:37.600 13:07:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c12261b6d3e4c78293dd0328c5ed3480 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c12261b6d3e4c78293dd0328c5ed3480 != \c\1\2\2\6\1\b\6\d\3\e\4\c\7\8\2\9\3\d\d\0\3\2\8\c\5\e\d\3\4\8\0 ]] 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:40.130 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 90322 ]] 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 90322 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 90322 ']' 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 90322 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90322 00:25:40.131 killing process with pid 90322 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90322' 00:25:40.131 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 90322 00:25:40.131 13:07:31 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 90322 00:25:40.131 [2024-08-11 13:07:31.633903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:40.131 [2024-08-11 13:07:31.637451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.637502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:40.131 [2024-08-11 13:07:31.637528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:40.131 [2024-08-11 13:07:31.637541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.637575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:40.131 [2024-08-11 13:07:31.638059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.638083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:40.131 [2024-08-11 13:07:31.638097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:25:40.131 [2024-08-11 13:07:31.638108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.638400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.638425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:40.131 [2024-08-11 13:07:31.638439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.240 ms 00:25:40.131 [2024-08-11 13:07:31.638451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.639707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.639744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:40.131 [2024-08-11 13:07:31.639759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.225 ms 00:25:40.131 [2024-08-11 13:07:31.639771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.641050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.641083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:40.131 [2024-08-11 13:07:31.641097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.232 ms 00:25:40.131 [2024-08-11 13:07:31.641109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.642387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.642426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:40.131 [2024-08-11 13:07:31.642456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.225 ms 00:25:40.131 [2024-08-11 13:07:31.642467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.643630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.643668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:40.131 [2024-08-11 13:07:31.643684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.104 ms 00:25:40.131 [2024-08-11 13:07:31.643703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.643830] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.643889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:40.131 [2024-08-11 13:07:31.643905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:25:40.131 [2024-08-11 13:07:31.643917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.645128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.645163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:40.131 [2024-08-11 13:07:31.645177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.185 ms 00:25:40.131 [2024-08-11 13:07:31.645189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.646363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.646402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:40.131 [2024-08-11 13:07:31.646416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:25:40.131 [2024-08-11 13:07:31.646427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.648379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.648417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:40.131 [2024-08-11 13:07:31.648431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.908 ms 00:25:40.131 [2024-08-11 13:07:31.648442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.649505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.649541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:40.131 [2024-08-11 13:07:31.649556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.982 ms 00:25:40.131 [2024-08-11 13:07:31.649566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.649608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:40.131 [2024-08-11 13:07:31.649646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:40.131 [2024-08-11 13:07:31.649670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:40.131 [2024-08-11 13:07:31.649682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:40.131 [2024-08-11 13:07:31.649695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:40.131 [2024-08-11 13:07:31.649891] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:40.131 [2024-08-11 13:07:31.649905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c881ba2a-7bde-40ca-a0ec-4abdd117d4aa 00:25:40.131 [2024-08-11 13:07:31.649918] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:40.131 [2024-08-11 13:07:31.649929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:40.131 [2024-08-11 13:07:31.649946] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:40.131 [2024-08-11 13:07:31.649958] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:40.131 [2024-08-11 13:07:31.649969] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:40.131 [2024-08-11 13:07:31.649980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:40.131 [2024-08-11 13:07:31.649991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:40.131 [2024-08-11 13:07:31.650002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:40.131 [2024-08-11 13:07:31.650013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:40.131 [2024-08-11 13:07:31.650025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.650036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:40.131 [2024-08-11 13:07:31.650049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:25:40.131 [2024-08-11 13:07:31.650060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.651464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.651495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:40.131 [2024-08-11 13:07:31.651510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.378 ms 00:25:40.131 [2024-08-11 13:07:31.651521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.651653] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:25:40.131 [2024-08-11 13:07:31.651679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:40.131 [2024-08-11 13:07:31.651693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:25:40.131 [2024-08-11 13:07:31.651705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.131 [2024-08-11 13:07:31.657254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.131 [2024-08-11 13:07:31.657318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:40.132 [2024-08-11 13:07:31.657336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.657348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.657407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.657422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:40.132 [2024-08-11 13:07:31.657434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.657460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.657584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.657605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:40.132 [2024-08-11 13:07:31.657618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.657639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.657665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.657689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:40.132 [2024-08-11 13:07:31.657712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.657723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.667603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.667679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:40.132 [2024-08-11 13:07:31.667698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.667711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.674450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.674521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:40.132 [2024-08-11 13:07:31.674539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.674552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.674660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.674679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:40.132 [2024-08-11 13:07:31.674715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.674726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 
13:07:31.674778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.674794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:40.132 [2024-08-11 13:07:31.674806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.674817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.674926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.674945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:40.132 [2024-08-11 13:07:31.674964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.674975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.675027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.675051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:40.132 [2024-08-11 13:07:31.675064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.675075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.675126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.675143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:40.132 [2024-08-11 13:07:31.675155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.675174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.675229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:40.132 [2024-08-11 13:07:31.675246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:40.132 [2024-08-11 13:07:31.675259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:40.132 [2024-08-11 13:07:31.675270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.132 [2024-08-11 13:07:31.675429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 37.944 ms, result 0 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:40.391 Remove shared memory files 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid90134 
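Each FTL management step above is reported by trace_step as an Action or Rollback entry with a name, a duration and a status, and the sequence is closed by the finish_msg summary ("FTL shutdown", 37.944 ms, result 0). A minimal sketch for totalling those per-step durations from a saved copy of this console output; it assumes only the NOTICE format shown above, and console.log is a hypothetical capture of this output, not a file the test produces.

    # Sum the per-step durations from lines such as:
    #   mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms
    awk '/trace_step:.*duration:/ { total += $(NF - 1) }
         END { printf "total traced step time: %.3f ms\n", total }' console.log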
00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:40.391 00:25:40.391 real 1m14.283s 00:25:40.391 user 1m45.068s 00:25:40.391 sys 0m22.015s 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:40.391 ************************************ 00:25:40.391 END TEST ftl_upgrade_shutdown 00:25:40.391 ************************************ 00:25:40.391 13:07:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:40.391 13:07:31 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:25:40.391 13:07:31 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:40.391 13:07:31 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:25:40.391 13:07:31 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:40.391 13:07:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:40.391 ************************************ 00:25:40.391 START TEST ftl_restore_fast 00:25:40.391 ************************************ 00:25:40.391 13:07:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:40.650 * Looking for test storage... 00:25:40.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
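The ftl_restore_fast run that starts here is driven by restore.sh's option parsing, traced below: -f selects the fast-shutdown path (fast_shutdown=1), -c gives the PCIe address of the device used as the NV cache, and the remaining positional argument is the base data device. A minimal sketch of an equivalent standalone invocation, reusing the repository path and device addresses from this run (both are host-specific); running it as root for hugepage and VFIO setup is an assumption, not something shown in the log.

    # Mirror the run_test call above by hand:
    #   -f                fast shutdown (sets fast_shutdown=1 in restore.sh)
    #   -c 0000:00:10.0   PCIe address of the NV cache device
    #   0000:00:11.0      PCIe address of the base (data) device
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/test/ftl/restore.sh" -f -c 0000:00:10.0 0000:00:11.0

Further down, the bdev sizes the script echoes (5120 and 103424) are plain num_blocks * block_size arithmetic over bdev_get_bdevs output (1310720 * 4096 B = 5120 MiB for nvme0n1, 26476544 * 4096 B = 103424 MiB for the lvol), and the 5171 MiB split carved out of nvc0n1 becomes the nvc0n1p0 cache handed to bdev_ftl_create --fast-shutdown. A sketch of the same size query, assuming a running spdk_tgt reachable through the rpc.py path used in this run; the bdev name is a placeholder.

    # get_bdev_size equivalent: size in MiB from block_size and num_blocks
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path used in this run
    BDEV=nvme0n1                                      # placeholder bdev name
    bs=$("$RPC" bdev_get_bdevs -b "$BDEV" | jq '.[] .block_size')
    nb=$("$RPC" bdev_get_bdevs -b "$BDEV" | jq '.[] .num_blocks')
    echo "$BDEV: $(( nb * bs / 1024 / 1024 )) MiB"    # 1310720 * 4096 B -> 5120 MiB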
00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.xdDGBQ3mlS 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:40.650 13:07:32 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=90536 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 90536 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 90536 ']' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:40.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:40.650 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:25:40.650 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:40.650 [2024-08-11 13:07:32.187607] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:25:40.650 [2024-08-11 13:07:32.187765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90536 ] 00:25:40.909 [2024-08-11 13:07:32.332700] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.909 [2024-08-11 13:07:32.371156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:25:41.173 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 
-- # local bs 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:41.431 13:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:41.689 13:07:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:41.689 { 00:25:41.689 "name": "nvme0n1", 00:25:41.689 "aliases": [ 00:25:41.689 "f2c6ffda-cf3c-4e1d-886f-144e818bde50" 00:25:41.689 ], 00:25:41.689 "product_name": "NVMe disk", 00:25:41.689 "block_size": 4096, 00:25:41.689 "num_blocks": 1310720, 00:25:41.689 "uuid": "f2c6ffda-cf3c-4e1d-886f-144e818bde50", 00:25:41.689 "assigned_rate_limits": { 00:25:41.689 "rw_ios_per_sec": 0, 00:25:41.689 "rw_mbytes_per_sec": 0, 00:25:41.689 "r_mbytes_per_sec": 0, 00:25:41.689 "w_mbytes_per_sec": 0 00:25:41.689 }, 00:25:41.689 "claimed": true, 00:25:41.690 "claim_type": "read_many_write_one", 00:25:41.690 "zoned": false, 00:25:41.690 "supported_io_types": { 00:25:41.690 "read": true, 00:25:41.690 "write": true, 00:25:41.690 "unmap": true, 00:25:41.690 "flush": true, 00:25:41.690 "reset": true, 00:25:41.690 "nvme_admin": true, 00:25:41.690 "nvme_io": true, 00:25:41.690 "nvme_io_md": false, 00:25:41.690 "write_zeroes": true, 00:25:41.690 "zcopy": false, 00:25:41.690 "get_zone_info": false, 00:25:41.690 "zone_management": false, 00:25:41.690 "zone_append": false, 00:25:41.690 "compare": true, 00:25:41.690 "compare_and_write": false, 00:25:41.690 "abort": true, 00:25:41.690 "seek_hole": false, 00:25:41.690 "seek_data": false, 00:25:41.690 "copy": true, 00:25:41.690 "nvme_iov_md": false 00:25:41.690 }, 00:25:41.690 "driver_specific": { 00:25:41.690 "nvme": [ 00:25:41.690 { 00:25:41.690 "pci_address": "0000:00:11.0", 00:25:41.690 "trid": { 00:25:41.690 "trtype": "PCIe", 00:25:41.690 "traddr": "0000:00:11.0" 00:25:41.690 }, 00:25:41.690 "ctrlr_data": { 00:25:41.690 "cntlid": 0, 00:25:41.690 "vendor_id": "0x1b36", 00:25:41.690 "model_number": "QEMU NVMe Ctrl", 00:25:41.690 "serial_number": "12341", 00:25:41.690 "firmware_revision": "8.0.0", 00:25:41.690 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:41.690 "oacs": { 00:25:41.690 "security": 0, 00:25:41.690 "format": 1, 00:25:41.690 "firmware": 0, 00:25:41.690 "ns_manage": 1 00:25:41.690 }, 00:25:41.690 "multi_ctrlr": false, 00:25:41.690 "ana_reporting": false 00:25:41.690 }, 00:25:41.690 "vs": { 00:25:41.690 "nvme_version": "1.4" 00:25:41.690 }, 00:25:41.690 "ns_data": { 00:25:41.690 "id": 1, 00:25:41.690 "can_share": false 00:25:41.690 } 00:25:41.690 } 00:25:41.690 ], 00:25:41.690 "mp_policy": "active_passive" 00:25:41.690 } 00:25:41.690 } 00:25:41.690 ]' 00:25:41.690 13:07:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:41.690 13:07:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:41.690 13:07:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:41.948 13:07:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:25:41.948 13:07:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:25:41.948 13:07:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:25:41.948 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:25:41.948 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:41.948 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:25:41.948 13:07:33 ftl.ftl_restore_fast 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:41.948 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:42.206 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=31683eac-0975-419b-baa4-d6d5e398beb4 00:25:42.206 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:25:42.206 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31683eac-0975-419b-baa4-d6d5e398beb4 00:25:42.463 13:07:33 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:42.721 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=f4947e8a-39b1-428f-948b-4520cb1be239 00:25:42.721 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f4947e8a-39b1-428f-948b-4520cb1be239 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:42.980 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:43.255 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:43.255 { 00:25:43.255 "name": "f0ca6172-34eb-48b3-a8f7-0945a75dc891", 00:25:43.255 "aliases": [ 00:25:43.255 "lvs/nvme0n1p0" 00:25:43.255 ], 00:25:43.255 "product_name": "Logical Volume", 00:25:43.255 "block_size": 4096, 00:25:43.255 "num_blocks": 26476544, 00:25:43.255 "uuid": "f0ca6172-34eb-48b3-a8f7-0945a75dc891", 00:25:43.255 "assigned_rate_limits": { 00:25:43.255 "rw_ios_per_sec": 0, 00:25:43.255 "rw_mbytes_per_sec": 0, 00:25:43.255 "r_mbytes_per_sec": 0, 00:25:43.255 "w_mbytes_per_sec": 0 00:25:43.255 }, 00:25:43.255 "claimed": false, 00:25:43.255 "zoned": false, 00:25:43.255 "supported_io_types": { 00:25:43.255 "read": true, 00:25:43.255 "write": true, 00:25:43.255 "unmap": true, 00:25:43.255 "flush": false, 00:25:43.255 "reset": true, 00:25:43.255 "nvme_admin": false, 00:25:43.255 "nvme_io": false, 00:25:43.255 "nvme_io_md": false, 00:25:43.255 "write_zeroes": true, 00:25:43.255 "zcopy": false, 00:25:43.255 "get_zone_info": false, 00:25:43.255 "zone_management": false, 
00:25:43.255 "zone_append": false, 00:25:43.255 "compare": false, 00:25:43.255 "compare_and_write": false, 00:25:43.255 "abort": false, 00:25:43.255 "seek_hole": true, 00:25:43.255 "seek_data": true, 00:25:43.255 "copy": false, 00:25:43.255 "nvme_iov_md": false 00:25:43.255 }, 00:25:43.255 "driver_specific": { 00:25:43.255 "lvol": { 00:25:43.255 "lvol_store_uuid": "f4947e8a-39b1-428f-948b-4520cb1be239", 00:25:43.255 "base_bdev": "nvme0n1", 00:25:43.255 "thin_provision": true, 00:25:43.255 "num_allocated_clusters": 0, 00:25:43.255 "snapshot": false, 00:25:43.255 "clone": false, 00:25:43.255 "esnap_clone": false 00:25:43.255 } 00:25:43.255 } 00:25:43.255 } 00:25:43.255 ]' 00:25:43.255 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:43.255 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:43.256 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:43.256 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:43.256 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:43.256 13:07:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:43.256 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:25:43.256 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:25:43.256 13:07:34 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:43.876 { 00:25:43.876 "name": "f0ca6172-34eb-48b3-a8f7-0945a75dc891", 00:25:43.876 "aliases": [ 00:25:43.876 "lvs/nvme0n1p0" 00:25:43.876 ], 00:25:43.876 "product_name": "Logical Volume", 00:25:43.876 "block_size": 4096, 00:25:43.876 "num_blocks": 26476544, 00:25:43.876 "uuid": "f0ca6172-34eb-48b3-a8f7-0945a75dc891", 00:25:43.876 "assigned_rate_limits": { 00:25:43.876 "rw_ios_per_sec": 0, 00:25:43.876 "rw_mbytes_per_sec": 0, 00:25:43.876 "r_mbytes_per_sec": 0, 00:25:43.876 "w_mbytes_per_sec": 0 00:25:43.876 }, 00:25:43.876 "claimed": false, 00:25:43.876 "zoned": false, 00:25:43.876 "supported_io_types": { 00:25:43.876 "read": true, 00:25:43.876 "write": true, 00:25:43.876 "unmap": true, 00:25:43.876 "flush": false, 00:25:43.876 "reset": true, 00:25:43.876 "nvme_admin": false, 00:25:43.876 "nvme_io": false, 00:25:43.876 "nvme_io_md": false, 00:25:43.876 "write_zeroes": true, 00:25:43.876 "zcopy": false, 00:25:43.876 "get_zone_info": false, 00:25:43.876 
"zone_management": false, 00:25:43.876 "zone_append": false, 00:25:43.876 "compare": false, 00:25:43.876 "compare_and_write": false, 00:25:43.876 "abort": false, 00:25:43.876 "seek_hole": true, 00:25:43.876 "seek_data": true, 00:25:43.876 "copy": false, 00:25:43.876 "nvme_iov_md": false 00:25:43.876 }, 00:25:43.876 "driver_specific": { 00:25:43.876 "lvol": { 00:25:43.876 "lvol_store_uuid": "f4947e8a-39b1-428f-948b-4520cb1be239", 00:25:43.876 "base_bdev": "nvme0n1", 00:25:43.876 "thin_provision": true, 00:25:43.876 "num_allocated_clusters": 0, 00:25:43.876 "snapshot": false, 00:25:43.876 "clone": false, 00:25:43.876 "esnap_clone": false 00:25:43.876 } 00:25:43.876 } 00:25:43.876 } 00:25:43.876 ]' 00:25:43.876 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:44.134 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:44.134 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:44.134 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:44.134 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:44.134 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:44.134 13:07:35 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:25:44.134 13:07:35 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:44.392 13:07:35 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:44.392 13:07:35 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:44.392 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:44.392 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:44.392 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:44.392 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:44.392 13:07:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0ca6172-34eb-48b3-a8f7-0945a75dc891 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:44.651 { 00:25:44.651 "name": "f0ca6172-34eb-48b3-a8f7-0945a75dc891", 00:25:44.651 "aliases": [ 00:25:44.651 "lvs/nvme0n1p0" 00:25:44.651 ], 00:25:44.651 "product_name": "Logical Volume", 00:25:44.651 "block_size": 4096, 00:25:44.651 "num_blocks": 26476544, 00:25:44.651 "uuid": "f0ca6172-34eb-48b3-a8f7-0945a75dc891", 00:25:44.651 "assigned_rate_limits": { 00:25:44.651 "rw_ios_per_sec": 0, 00:25:44.651 "rw_mbytes_per_sec": 0, 00:25:44.651 "r_mbytes_per_sec": 0, 00:25:44.651 "w_mbytes_per_sec": 0 00:25:44.651 }, 00:25:44.651 "claimed": false, 00:25:44.651 "zoned": false, 00:25:44.651 "supported_io_types": { 00:25:44.651 "read": true, 00:25:44.651 "write": true, 00:25:44.651 "unmap": true, 00:25:44.651 "flush": false, 00:25:44.651 "reset": true, 00:25:44.651 "nvme_admin": false, 00:25:44.651 "nvme_io": false, 00:25:44.651 "nvme_io_md": false, 00:25:44.651 "write_zeroes": true, 00:25:44.651 "zcopy": false, 00:25:44.651 "get_zone_info": false, 00:25:44.651 "zone_management": false, 00:25:44.651 "zone_append": false, 00:25:44.651 "compare": false, 00:25:44.651 "compare_and_write": false, 00:25:44.651 "abort": false, 
00:25:44.651 "seek_hole": true, 00:25:44.651 "seek_data": true, 00:25:44.651 "copy": false, 00:25:44.651 "nvme_iov_md": false 00:25:44.651 }, 00:25:44.651 "driver_specific": { 00:25:44.651 "lvol": { 00:25:44.651 "lvol_store_uuid": "f4947e8a-39b1-428f-948b-4520cb1be239", 00:25:44.651 "base_bdev": "nvme0n1", 00:25:44.651 "thin_provision": true, 00:25:44.651 "num_allocated_clusters": 0, 00:25:44.651 "snapshot": false, 00:25:44.651 "clone": false, 00:25:44.651 "esnap_clone": false 00:25:44.651 } 00:25:44.651 } 00:25:44.651 } 00:25:44.651 ]' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f0ca6172-34eb-48b3-a8f7-0945a75dc891 --l2p_dram_limit 10' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:25:44.651 13:07:36 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f0ca6172-34eb-48b3-a8f7-0945a75dc891 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:25:44.910 [2024-08-11 13:07:36.362394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.362471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:44.910 [2024-08-11 13:07:36.362498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:44.910 [2024-08-11 13:07:36.362512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.362640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.362664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:44.910 [2024-08-11 13:07:36.362680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:44.910 [2024-08-11 13:07:36.362692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.362754] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:44.910 [2024-08-11 13:07:36.363140] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:44.910 [2024-08-11 13:07:36.363173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.363194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:44.910 [2024-08-11 13:07:36.363211] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:25:44.910 [2024-08-11 13:07:36.363223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.363434] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f745d491-457d-424d-b208-1b4fc2dfd6d9 00:25:44.910 [2024-08-11 13:07:36.364516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.364564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:44.910 [2024-08-11 13:07:36.364581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:44.910 [2024-08-11 13:07:36.364595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.369480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.369555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:44.910 [2024-08-11 13:07:36.369578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:25:44.910 [2024-08-11 13:07:36.369597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.369710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.369736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:44.910 [2024-08-11 13:07:36.369751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:44.910 [2024-08-11 13:07:36.369767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.369915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.369942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:44.910 [2024-08-11 13:07:36.369956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:44.910 [2024-08-11 13:07:36.369973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.370009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.910 [2024-08-11 13:07:36.371646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.371685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:44.910 [2024-08-11 13:07:36.371705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:25:44.910 [2024-08-11 13:07:36.371717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.371769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.371785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:44.910 [2024-08-11 13:07:36.371801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:44.910 [2024-08-11 13:07:36.371815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.371846] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:44.910 [2024-08-11 13:07:36.372074] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:44.910 [2024-08-11 13:07:36.372102] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:44.910 [2024-08-11 13:07:36.372118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:44.910 [2024-08-11 13:07:36.372143] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:44.910 [2024-08-11 13:07:36.372158] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:44.910 [2024-08-11 13:07:36.372174] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:44.910 [2024-08-11 13:07:36.372185] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:44.910 [2024-08-11 13:07:36.372199] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:44.910 [2024-08-11 13:07:36.372211] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:44.910 [2024-08-11 13:07:36.372226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.372239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:44.910 [2024-08-11 13:07:36.372263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:25:44.910 [2024-08-11 13:07:36.372275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.372376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.910 [2024-08-11 13:07:36.372394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:44.910 [2024-08-11 13:07:36.372412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:44.910 [2024-08-11 13:07:36.372433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.910 [2024-08-11 13:07:36.372545] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:44.910 [2024-08-11 13:07:36.372564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:44.910 [2024-08-11 13:07:36.372580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.910 [2024-08-11 13:07:36.372596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:44.911 [2024-08-11 13:07:36.372622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:44.911 [2024-08-11 13:07:36.372649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:44.911 [2024-08-11 13:07:36.372663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.911 [2024-08-11 13:07:36.372687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:44.911 [2024-08-11 13:07:36.372698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:44.911 [2024-08-11 13:07:36.372711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.911 [2024-08-11 13:07:36.372722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:44.911 [2024-08-11 13:07:36.372738] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:44.911 [2024-08-11 13:07:36.372749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:44.911 [2024-08-11 13:07:36.372773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:44.911 [2024-08-11 13:07:36.372786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:44.911 [2024-08-11 13:07:36.372810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.911 [2024-08-11 13:07:36.372835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:44.911 [2024-08-11 13:07:36.372846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.911 [2024-08-11 13:07:36.372888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:44.911 [2024-08-11 13:07:36.372905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.911 [2024-08-11 13:07:36.372929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:44.911 [2024-08-11 13:07:36.372940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.911 [2024-08-11 13:07:36.372966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:44.911 [2024-08-11 13:07:36.372981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:44.911 [2024-08-11 13:07:36.372992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.911 [2024-08-11 13:07:36.373005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:44.911 [2024-08-11 13:07:36.373017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:44.911 [2024-08-11 13:07:36.373029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.911 [2024-08-11 13:07:36.373040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:44.911 [2024-08-11 13:07:36.373053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:44.911 [2024-08-11 13:07:36.373064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.911 [2024-08-11 13:07:36.373077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:44.911 [2024-08-11 13:07:36.373088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:44.911 [2024-08-11 13:07:36.373101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.911 [2024-08-11 13:07:36.373111] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:44.911 [2024-08-11 13:07:36.373126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:44.911 [2024-08-11 13:07:36.373140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:25:44.911 [2024-08-11 13:07:36.373156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.911 [2024-08-11 13:07:36.373168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:44.911 [2024-08-11 13:07:36.373195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:44.911 [2024-08-11 13:07:36.373207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:44.911 [2024-08-11 13:07:36.373220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:44.911 [2024-08-11 13:07:36.373231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:44.911 [2024-08-11 13:07:36.373244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:44.911 [2024-08-11 13:07:36.373265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:44.911 [2024-08-11 13:07:36.373283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.911 [2024-08-11 13:07:36.373304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:44.911 [2024-08-11 13:07:36.373320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:44.911 [2024-08-11 13:07:36.373333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:44.911 [2024-08-11 13:07:36.373347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:44.911 [2024-08-11 13:07:36.373359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:44.911 [2024-08-11 13:07:36.373373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:44.911 [2024-08-11 13:07:36.373385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:44.911 [2024-08-11 13:07:36.373402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:44.911 [2024-08-11 13:07:36.373413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:44.911 [2024-08-11 13:07:36.373427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:44.911 [2024-08-11 13:07:36.373439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:44.911 [2024-08-11 13:07:36.373454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:44.911 [2024-08-11 13:07:36.373465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:44.911 [2024-08-11 13:07:36.373480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:25:44.911 [2024-08-11 13:07:36.373492] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:44.911 [2024-08-11 13:07:36.373507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.911 [2024-08-11 13:07:36.373520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:44.911 [2024-08-11 13:07:36.373534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:44.911 [2024-08-11 13:07:36.373547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:44.911 [2024-08-11 13:07:36.373561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:44.911 [2024-08-11 13:07:36.373574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.911 [2024-08-11 13:07:36.373589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:44.911 [2024-08-11 13:07:36.373601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:25:44.911 [2024-08-11 13:07:36.373618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.911 [2024-08-11 13:07:36.373722] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:44.911 [2024-08-11 13:07:36.373747] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:46.812 [2024-08-11 13:07:38.331583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.331666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:46.812 [2024-08-11 13:07:38.331689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1957.872 ms 00:25:46.812 [2024-08-11 13:07:38.331705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.812 [2024-08-11 13:07:38.339900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.339985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.812 [2024-08-11 13:07:38.340007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.095 ms 00:25:46.812 [2024-08-11 13:07:38.340023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.812 [2024-08-11 13:07:38.340160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.340183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:46.812 [2024-08-11 13:07:38.340197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:46.812 [2024-08-11 13:07:38.340211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.812 [2024-08-11 13:07:38.348865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.348973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.812 [2024-08-11 13:07:38.348993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.560 ms 00:25:46.812 [2024-08-11 13:07:38.349019] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.812 [2024-08-11 13:07:38.349079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.349099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.812 [2024-08-11 13:07:38.349113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.812 [2024-08-11 13:07:38.349126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.812 [2024-08-11 13:07:38.349493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.349517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.812 [2024-08-11 13:07:38.349534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:25:46.812 [2024-08-11 13:07:38.349548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.812 [2024-08-11 13:07:38.349707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.349731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.812 [2024-08-11 13:07:38.349745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:25:46.812 [2024-08-11 13:07:38.349759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.812 [2024-08-11 13:07:38.355624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.812 [2024-08-11 13:07:38.355699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.812 [2024-08-11 13:07:38.355731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.836 ms 00:25:46.813 [2024-08-11 13:07:38.355746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.813 [2024-08-11 13:07:38.365163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:46.813 [2024-08-11 13:07:38.367993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.813 [2024-08-11 13:07:38.368037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:46.813 [2024-08-11 13:07:38.368061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.065 ms 00:25:46.813 [2024-08-11 13:07:38.368077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.425782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.425865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:47.071 [2024-08-11 13:07:38.426256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.622 ms 00:25:47.071 [2024-08-11 13:07:38.426284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.426567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.426598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:47.071 [2024-08-11 13:07:38.426617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:25:47.071 [2024-08-11 13:07:38.426629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.430365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.430418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:25:47.071 [2024-08-11 13:07:38.430453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.692 ms 00:25:47.071 [2024-08-11 13:07:38.430475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.433584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.433629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:47.071 [2024-08-11 13:07:38.433651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.051 ms 00:25:47.071 [2024-08-11 13:07:38.433664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.434049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.434087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:47.071 [2024-08-11 13:07:38.434117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:25:47.071 [2024-08-11 13:07:38.434137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.468234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.468318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:47.071 [2024-08-11 13:07:38.468346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.019 ms 00:25:47.071 [2024-08-11 13:07:38.468362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.473158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.473220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:47.071 [2024-08-11 13:07:38.473245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.721 ms 00:25:47.071 [2024-08-11 13:07:38.473261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.477412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.477477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:47.071 [2024-08-11 13:07:38.477502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:25:47.071 [2024-08-11 13:07:38.477516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.482064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.482128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:47.071 [2024-08-11 13:07:38.482153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.482 ms 00:25:47.071 [2024-08-11 13:07:38.482168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.071 [2024-08-11 13:07:38.482241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.071 [2024-08-11 13:07:38.482262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:47.071 [2024-08-11 13:07:38.482281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:47.072 [2024-08-11 13:07:38.482295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.072 [2024-08-11 13:07:38.482398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.072 [2024-08-11 13:07:38.482417] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:47.072 [2024-08-11 13:07:38.482435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:47.072 [2024-08-11 13:07:38.482449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.072 [2024-08-11 13:07:38.483711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2120.734 ms, result 0 00:25:47.072 { 00:25:47.072 "name": "ftl0", 00:25:47.072 "uuid": "f745d491-457d-424d-b208-1b4fc2dfd6d9" 00:25:47.072 } 00:25:47.072 13:07:38 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:47.072 13:07:38 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:47.330 13:07:38 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:25:47.330 13:07:38 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:47.590 [2024-08-11 13:07:39.104974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.590 [2024-08-11 13:07:39.105050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:47.590 [2024-08-11 13:07:39.105073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:47.590 [2024-08-11 13:07:39.105087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.590 [2024-08-11 13:07:39.105124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:47.590 [2024-08-11 13:07:39.105605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.590 [2024-08-11 13:07:39.105634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:47.590 [2024-08-11 13:07:39.105667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:25:47.590 [2024-08-11 13:07:39.105679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.590 [2024-08-11 13:07:39.105994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.590 [2024-08-11 13:07:39.106021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:47.590 [2024-08-11 13:07:39.106042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:25:47.590 [2024-08-11 13:07:39.106055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.590 [2024-08-11 13:07:39.109384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.590 [2024-08-11 13:07:39.109421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:47.590 [2024-08-11 13:07:39.109438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:25:47.590 [2024-08-11 13:07:39.109450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.116228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.591 [2024-08-11 13:07:39.116282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:47.591 [2024-08-11 13:07:39.116301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.740 ms 00:25:47.591 [2024-08-11 13:07:39.116313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.117959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:47.591 [2024-08-11 13:07:39.118002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:47.591 [2024-08-11 13:07:39.118024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.505 ms 00:25:47.591 [2024-08-11 13:07:39.118036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.122083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.591 [2024-08-11 13:07:39.122138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:47.591 [2024-08-11 13:07:39.122160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.993 ms 00:25:47.591 [2024-08-11 13:07:39.122173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.122327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.591 [2024-08-11 13:07:39.122347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:47.591 [2024-08-11 13:07:39.122364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:47.591 [2024-08-11 13:07:39.122376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.124056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.591 [2024-08-11 13:07:39.124097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:47.591 [2024-08-11 13:07:39.124116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:25:47.591 [2024-08-11 13:07:39.124128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.125540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.591 [2024-08-11 13:07:39.125581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:47.591 [2024-08-11 13:07:39.125603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:25:47.591 [2024-08-11 13:07:39.125615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.126750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.591 [2024-08-11 13:07:39.126791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:47.591 [2024-08-11 13:07:39.126809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:25:47.591 [2024-08-11 13:07:39.126820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.127920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.591 [2024-08-11 13:07:39.127970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:47.591 [2024-08-11 13:07:39.127990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:25:47.591 [2024-08-11 13:07:39.128001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.591 [2024-08-11 13:07:39.128107] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:47.591 [2024-08-11 13:07:39.128132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128162] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128514] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 
13:07:39.128852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:47.591 [2024-08-11 13:07:39.128993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:25:47.592 [2024-08-11 13:07:39.129210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:47.592 [2024-08-11 13:07:39.129525] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:47.592 [2024-08-11 13:07:39.129540] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f745d491-457d-424d-b208-1b4fc2dfd6d9 00:25:47.592 
[2024-08-11 13:07:39.129553] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:47.592 [2024-08-11 13:07:39.129567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:47.592 [2024-08-11 13:07:39.129582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:47.592 [2024-08-11 13:07:39.129596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:47.592 [2024-08-11 13:07:39.129607] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:47.592 [2024-08-11 13:07:39.129621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:47.592 [2024-08-11 13:07:39.129633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:47.592 [2024-08-11 13:07:39.129646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:47.592 [2024-08-11 13:07:39.129656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:47.592 [2024-08-11 13:07:39.129669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-08-11 13:07:39.129681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:47.592 [2024-08-11 13:07:39.129696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:25:47.592 [2024-08-11 13:07:39.129707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.131156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-08-11 13:07:39.131190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:47.592 [2024-08-11 13:07:39.131210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:25:47.592 [2024-08-11 13:07:39.131222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.131365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-08-11 13:07:39.131384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:47.592 [2024-08-11 13:07:39.131400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:47.592 [2024-08-11 13:07:39.131412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.136852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.136966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.592 [2024-08-11 13:07:39.136989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.137002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.137097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.137113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.592 [2024-08-11 13:07:39.137128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.137139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.137240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.137263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.592 [2024-08-11 13:07:39.137281] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.137292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.137322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.137337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.592 [2024-08-11 13:07:39.137352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.137364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.146810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.146919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.592 [2024-08-11 13:07:39.146956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.146969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.153434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.153507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.592 [2024-08-11 13:07:39.153529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.153542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.153663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.153682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.592 [2024-08-11 13:07:39.153704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.153716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.153770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.153787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.592 [2024-08-11 13:07:39.153802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.153814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.153937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.153958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.592 [2024-08-11 13:07:39.153976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.592 [2024-08-11 13:07:39.153987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-08-11 13:07:39.154049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.592 [2024-08-11 13:07:39.154067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:47.592 [2024-08-11 13:07:39.154083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.593 [2024-08-11 13:07:39.154094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.593 [2024-08-11 13:07:39.154146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.593 [2024-08-11 13:07:39.154163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:25:47.593 [2024-08-11 13:07:39.154180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.593 [2024-08-11 13:07:39.154194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.593 [2024-08-11 13:07:39.154254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:47.593 [2024-08-11 13:07:39.154271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.593 [2024-08-11 13:07:39.154286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:47.593 [2024-08-11 13:07:39.154297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.593 [2024-08-11 13:07:39.154466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.441 ms, result 0 00:25:47.593 true 00:25:47.593 13:07:39 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 90536 00:25:47.593 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 90536 ']' 00:25:47.593 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 90536 00:25:47.593 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:25:47.593 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:47.593 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90536 00:25:47.851 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:47.851 killing process with pid 90536 00:25:47.851 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:47.851 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90536' 00:25:47.851 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 90536 00:25:47.851 13:07:39 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 90536 00:25:51.139 13:07:42 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:55.330 262144+0 records in 00:25:55.330 262144+0 records out 00:25:55.330 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.73674 s, 227 MB/s 00:25:55.330 13:07:46 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:57.865 13:07:49 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:57.865 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:25:57.865 [2024-08-11 13:07:49.123976] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
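Condensed, the ftl_restore_fast sequence traced above is: save the current bdev configuration over RPC, unload ftl0 and kill the original SPDK app, generate a 1 GiB random test file, record its md5sum, and then replay that file onto the FTL bdev with spdk_dd using the saved configuration. A minimal sketch of those shell steps follows (paths, block size, and count are the values from this run; the redirection of the saved config into ftl.json is an assumption inferred from the --json argument to spdk_dd, not shown verbatim in the trace):

  # Sketch of the restore flow seen in the trace above; values taken from this run.
  SPDK=/home/vagrant/spdk_repo/spdk
  TESTFILE=$SPDK/test/ftl/testfile
  FTL_JSON=$SPDK/test/ftl/config/ftl.json

  # Save the current bdev subsystem configuration for later use by spdk_dd
  # (the redirect target is an assumption; the trace only shows the three commands).
  {
    echo '{"subsystems": ['
    $SPDK/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > "$FTL_JSON"

  # After bdev_ftl_unload -b ftl0 and killing the original process (pid 90536 above),
  # prepare 1 GiB of random data and record its checksum for the later restore check.
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K
  md5sum "$TESTFILE"

  # Write the test file to the FTL bdev using the saved configuration.
  $SPDK/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"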
00:25:57.865 [2024-08-11 13:07:49.124122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90734 ] 00:25:57.865 [2024-08-11 13:07:49.264852] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.865 [2024-08-11 13:07:49.301408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.865 [2024-08-11 13:07:49.385325] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.865 [2024-08-11 13:07:49.385410] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:58.126 [2024-08-11 13:07:49.543364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.543439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:58.126 [2024-08-11 13:07:49.543459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:58.126 [2024-08-11 13:07:49.543471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.543556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.543575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:58.126 [2024-08-11 13:07:49.543588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:58.126 [2024-08-11 13:07:49.543599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.543631] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:58.126 [2024-08-11 13:07:49.544004] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:58.126 [2024-08-11 13:07:49.544039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.544051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:58.126 [2024-08-11 13:07:49.544063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:25:58.126 [2024-08-11 13:07:49.544078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.545253] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:58.126 [2024-08-11 13:07:49.547396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.547438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:58.126 [2024-08-11 13:07:49.547455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:25:58.126 [2024-08-11 13:07:49.547468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.547538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.547558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:58.126 [2024-08-11 13:07:49.547571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:58.126 [2024-08-11 13:07:49.547582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.552173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:58.126 [2024-08-11 13:07:49.552272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:58.126 [2024-08-11 13:07:49.552301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:25:58.126 [2024-08-11 13:07:49.552319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.552532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.552565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:58.126 [2024-08-11 13:07:49.552597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:25:58.126 [2024-08-11 13:07:49.552625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.552784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.552813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:58.126 [2024-08-11 13:07:49.552840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:58.126 [2024-08-11 13:07:49.552858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.552934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:58.126 [2024-08-11 13:07:49.554648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.554697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:58.126 [2024-08-11 13:07:49.554715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:25:58.126 [2024-08-11 13:07:49.554734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.554787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.554804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:58.126 [2024-08-11 13:07:49.554816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:58.126 [2024-08-11 13:07:49.554839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.554895] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:58.126 [2024-08-11 13:07:49.554933] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:58.126 [2024-08-11 13:07:49.554986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:58.126 [2024-08-11 13:07:49.555016] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:58.126 [2024-08-11 13:07:49.555126] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:58.126 [2024-08-11 13:07:49.555143] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:58.126 [2024-08-11 13:07:49.555158] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:58.126 [2024-08-11 13:07:49.555173] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:58.126 [2024-08-11 13:07:49.555196] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:58.126 [2024-08-11 13:07:49.555208] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:58.126 [2024-08-11 13:07:49.555219] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:58.126 [2024-08-11 13:07:49.555230] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:58.126 [2024-08-11 13:07:49.555244] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:58.126 [2024-08-11 13:07:49.555256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.555267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:58.126 [2024-08-11 13:07:49.555279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:25:58.126 [2024-08-11 13:07:49.555290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.555405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.126 [2024-08-11 13:07:49.555432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:58.126 [2024-08-11 13:07:49.555457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:58.126 [2024-08-11 13:07:49.555476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.126 [2024-08-11 13:07:49.555640] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:58.126 [2024-08-11 13:07:49.555671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:58.126 [2024-08-11 13:07:49.555692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:58.126 [2024-08-11 13:07:49.555711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.126 [2024-08-11 13:07:49.555728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:58.126 [2024-08-11 13:07:49.555745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:58.126 [2024-08-11 13:07:49.555763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:58.126 [2024-08-11 13:07:49.555781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:58.126 [2024-08-11 13:07:49.555799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:58.126 [2024-08-11 13:07:49.555816] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:58.127 [2024-08-11 13:07:49.555832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:58.127 [2024-08-11 13:07:49.555849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:58.127 [2024-08-11 13:07:49.555899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:58.127 [2024-08-11 13:07:49.555915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:58.127 [2024-08-11 13:07:49.555932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:58.127 [2024-08-11 13:07:49.555943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.127 [2024-08-11 13:07:49.555953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:58.127 [2024-08-11 13:07:49.555963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:58.127 [2024-08-11 13:07:49.555973] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.127 [2024-08-11 13:07:49.555985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:58.127 [2024-08-11 13:07:49.556014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.127 [2024-08-11 13:07:49.556048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:58.127 [2024-08-11 13:07:49.556066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.127 [2024-08-11 13:07:49.556092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:58.127 [2024-08-11 13:07:49.556103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.127 [2024-08-11 13:07:49.556123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:58.127 [2024-08-11 13:07:49.556133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.127 [2024-08-11 13:07:49.556160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:58.127 [2024-08-11 13:07:49.556171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:58.127 [2024-08-11 13:07:49.556191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:58.127 [2024-08-11 13:07:49.556202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:58.127 [2024-08-11 13:07:49.556211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:58.127 [2024-08-11 13:07:49.556221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:58.127 [2024-08-11 13:07:49.556232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:58.127 [2024-08-11 13:07:49.556242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:58.127 [2024-08-11 13:07:49.556262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:58.127 [2024-08-11 13:07:49.556271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556281] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:58.127 [2024-08-11 13:07:49.556302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:58.127 [2024-08-11 13:07:49.556323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:58.127 [2024-08-11 13:07:49.556337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.127 [2024-08-11 13:07:49.556349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:58.127 [2024-08-11 13:07:49.556359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:58.127 [2024-08-11 13:07:49.556369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:58.127 
[2024-08-11 13:07:49.556380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:58.127 [2024-08-11 13:07:49.556390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:58.127 [2024-08-11 13:07:49.556400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:58.127 [2024-08-11 13:07:49.556412] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:58.127 [2024-08-11 13:07:49.556426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:58.127 [2024-08-11 13:07:49.556448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:58.127 [2024-08-11 13:07:49.556460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:58.127 [2024-08-11 13:07:49.556471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:58.127 [2024-08-11 13:07:49.556482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:58.127 [2024-08-11 13:07:49.556493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:58.127 [2024-08-11 13:07:49.556504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:58.127 [2024-08-11 13:07:49.556516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:58.127 [2024-08-11 13:07:49.556530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:58.127 [2024-08-11 13:07:49.556542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:58.127 [2024-08-11 13:07:49.556553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:58.127 [2024-08-11 13:07:49.556564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:58.127 [2024-08-11 13:07:49.556586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:58.127 [2024-08-11 13:07:49.556598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:58.127 [2024-08-11 13:07:49.556609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:58.127 [2024-08-11 13:07:49.556621] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:58.127 [2024-08-11 13:07:49.556637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:58.127 [2024-08-11 13:07:49.556649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:58.127 [2024-08-11 13:07:49.556661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:58.127 [2024-08-11 13:07:49.556672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:58.127 [2024-08-11 13:07:49.556683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:58.127 [2024-08-11 13:07:49.556696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.556729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:58.127 [2024-08-11 13:07:49.556747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:25:58.127 [2024-08-11 13:07:49.556762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.573922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.573999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:58.127 [2024-08-11 13:07:49.574025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.061 ms 00:25:58.127 [2024-08-11 13:07:49.574041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.574198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.574225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:58.127 [2024-08-11 13:07:49.574242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:58.127 [2024-08-11 13:07:49.574256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.583427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.583760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:58.127 [2024-08-11 13:07:49.583804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.054 ms 00:25:58.127 [2024-08-11 13:07:49.583820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.583974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.583999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:58.127 [2024-08-11 13:07:49.584016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:58.127 [2024-08-11 13:07:49.584030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.584435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.584454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:58.127 [2024-08-11 13:07:49.584479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:25:58.127 [2024-08-11 13:07:49.584502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.584664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.584687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:58.127 [2024-08-11 13:07:49.584708] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:25:58.127 [2024-08-11 13:07:49.584718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.589371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.589431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:58.127 [2024-08-11 13:07:49.589450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:25:58.127 [2024-08-11 13:07:49.589461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.591746] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:58.127 [2024-08-11 13:07:49.591793] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:58.127 [2024-08-11 13:07:49.591813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.127 [2024-08-11 13:07:49.591826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:58.127 [2024-08-11 13:07:49.591838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:25:58.127 [2024-08-11 13:07:49.591853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.127 [2024-08-11 13:07:49.607877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.607983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:58.128 [2024-08-11 13:07:49.608006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.913 ms 00:25:58.128 [2024-08-11 13:07:49.608020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.610299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.610480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:58.128 [2024-08-11 13:07:49.610508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:25:58.128 [2024-08-11 13:07:49.610521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.612250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.612290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:58.128 [2024-08-11 13:07:49.612306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:25:58.128 [2024-08-11 13:07:49.612317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.612726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.612776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:58.128 [2024-08-11 13:07:49.612792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:25:58.128 [2024-08-11 13:07:49.612808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.630525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.630604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:58.128 [2024-08-11 13:07:49.630626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.681 ms 00:25:58.128 [2024-08-11 13:07:49.630653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.639242] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:58.128 [2024-08-11 13:07:49.642301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.642342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:58.128 [2024-08-11 13:07:49.642361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.564 ms 00:25:58.128 [2024-08-11 13:07:49.642372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.642471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.642490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:58.128 [2024-08-11 13:07:49.642509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:58.128 [2024-08-11 13:07:49.642523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.642665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.642687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:58.128 [2024-08-11 13:07:49.642700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:58.128 [2024-08-11 13:07:49.642711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.642744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.642779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:58.128 [2024-08-11 13:07:49.642791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:58.128 [2024-08-11 13:07:49.642802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.642852] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:58.128 [2024-08-11 13:07:49.642893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.642908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:58.128 [2024-08-11 13:07:49.642934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:58.128 [2024-08-11 13:07:49.642945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.646337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.646390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:58.128 [2024-08-11 13:07:49.646408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.358 ms 00:25:58.128 [2024-08-11 13:07:49.646420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 [2024-08-11 13:07:49.646594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.128 [2024-08-11 13:07:49.646616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:58.128 [2024-08-11 13:07:49.646638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:58.128 [2024-08-11 13:07:49.646650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.128 
[2024-08-11 13:07:49.647780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.939 ms, result 0 00:26:36.424  Copying: 27/1024 [MB] (27 MBps) Copying: 55/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (25 MBps) Copying: 109/1024 [MB] (27 MBps) Copying: 136/1024 [MB] (27 MBps) Copying: 161/1024 [MB] (25 MBps) Copying: 187/1024 [MB] (25 MBps) Copying: 213/1024 [MB] (26 MBps) Copying: 240/1024 [MB] (26 MBps) Copying: 266/1024 [MB] (26 MBps) Copying: 293/1024 [MB] (26 MBps) Copying: 319/1024 [MB] (26 MBps) Copying: 346/1024 [MB] (26 MBps) Copying: 373/1024 [MB] (26 MBps) Copying: 399/1024 [MB] (26 MBps) Copying: 425/1024 [MB] (25 MBps) Copying: 452/1024 [MB] (27 MBps) Copying: 480/1024 [MB] (27 MBps) Copying: 507/1024 [MB] (27 MBps) Copying: 534/1024 [MB] (27 MBps) Copying: 561/1024 [MB] (26 MBps) Copying: 588/1024 [MB] (27 MBps) Copying: 616/1024 [MB] (27 MBps) Copying: 643/1024 [MB] (26 MBps) Copying: 669/1024 [MB] (26 MBps) Copying: 697/1024 [MB] (27 MBps) Copying: 724/1024 [MB] (26 MBps) Copying: 750/1024 [MB] (26 MBps) Copying: 777/1024 [MB] (26 MBps) Copying: 804/1024 [MB] (27 MBps) Copying: 831/1024 [MB] (27 MBps) Copying: 858/1024 [MB] (27 MBps) Copying: 885/1024 [MB] (26 MBps) Copying: 911/1024 [MB] (26 MBps) Copying: 938/1024 [MB] (27 MBps) Copying: 966/1024 [MB] (27 MBps) Copying: 993/1024 [MB] (26 MBps) Copying: 1020/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-08-11 13:08:27.772889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.424 [2024-08-11 13:08:27.772951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:36.424 [2024-08-11 13:08:27.772973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:36.424 [2024-08-11 13:08:27.773016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.424 [2024-08-11 13:08:27.773050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:36.424 [2024-08-11 13:08:27.773487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.424 [2024-08-11 13:08:27.773513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:36.424 [2024-08-11 13:08:27.773527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:26:36.424 [2024-08-11 13:08:27.773545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.424 [2024-08-11 13:08:27.774893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.424 [2024-08-11 13:08:27.774935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:36.424 [2024-08-11 13:08:27.774950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:26:36.424 [2024-08-11 13:08:27.774962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.424 [2024-08-11 13:08:27.774997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.424 [2024-08-11 13:08:27.775011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:36.424 [2024-08-11 13:08:27.775023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:36.424 [2024-08-11 13:08:27.775033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.424 [2024-08-11 13:08:27.775090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.424 [2024-08-11 
13:08:27.775109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:36.424 [2024-08-11 13:08:27.775121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:36.424 [2024-08-11 13:08:27.775132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.424 [2024-08-11 13:08:27.775151] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:36.424 [2024-08-11 13:08:27.775168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:36.424 [2024-08-11 13:08:27.775183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:36.424 [2024-08-11 13:08:27.775195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:36.424 [2024-08-11 13:08:27.775206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:36.424 [2024-08-11 13:08:27.775217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:36.424 [2024-08-11 13:08:27.775229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:36.424 [2024-08-11 13:08:27.775240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 
261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.775995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776018] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 
13:08:27.776323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:36.425 [2024-08-11 13:08:27.776335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:36.426 [2024-08-11 13:08:27.776346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:36.426 [2024-08-11 13:08:27.776358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:36.426 [2024-08-11 13:08:27.776378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:36.426 [2024-08-11 13:08:27.776389] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f745d491-457d-424d-b208-1b4fc2dfd6d9 00:26:36.426 [2024-08-11 13:08:27.776400] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:36.426 [2024-08-11 13:08:27.776411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:36.426 [2024-08-11 13:08:27.776421] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:36.426 [2024-08-11 13:08:27.776432] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:36.426 [2024-08-11 13:08:27.776447] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:36.426 [2024-08-11 13:08:27.776458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:36.426 [2024-08-11 13:08:27.776468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:36.426 [2024-08-11 13:08:27.776479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:36.426 [2024-08-11 13:08:27.776489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:36.426 [2024-08-11 13:08:27.776500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.426 [2024-08-11 13:08:27.776524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:36.426 [2024-08-11 13:08:27.776536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:26:36.426 [2024-08-11 13:08:27.776547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.777806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.426 [2024-08-11 13:08:27.777829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:36.426 [2024-08-11 13:08:27.777849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:26:36.426 [2024-08-11 13:08:27.777860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.778337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.426 [2024-08-11 13:08:27.778392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:36.426 [2024-08-11 13:08:27.778616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:36.426 [2024-08-11 13:08:27.778664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.783241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.783285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:36.426 [2024-08-11 13:08:27.783300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.783311] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.783377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.783392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.426 [2024-08-11 13:08:27.783410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.783433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.783484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.783503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.426 [2024-08-11 13:08:27.783521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.783532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.783554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.783567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.426 [2024-08-11 13:08:27.783578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.783589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.792324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.792399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.426 [2024-08-11 13:08:27.792428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.792440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.798783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.798853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.426 [2024-08-11 13:08:27.798888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.798902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.798979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.799014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:36.426 [2024-08-11 13:08:27.799026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.799043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.799074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.799087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:36.426 [2024-08-11 13:08:27.799099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.799109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.799179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.799197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:36.426 [2024-08-11 13:08:27.799209] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.799220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.799270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.799289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:36.426 [2024-08-11 13:08:27.799312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.799323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.799367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.799381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:36.426 [2024-08-11 13:08:27.799394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.799404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.799461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:36.426 [2024-08-11 13:08:27.799484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:36.426 [2024-08-11 13:08:27.799496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:36.426 [2024-08-11 13:08:27.799508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.426 [2024-08-11 13:08:27.799660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 26.754 ms, result 0 00:26:37.362 00:26:37.362 00:26:37.362 13:08:28 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:37.362 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:26:37.362 [2024-08-11 13:08:28.869118] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:26:37.362 [2024-08-11 13:08:28.869288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91121 ] 00:26:37.649 [2024-08-11 13:08:29.017811] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.649 [2024-08-11 13:08:29.054262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.649 [2024-08-11 13:08:29.138090] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.649 [2024-08-11 13:08:29.138180] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.920 [2024-08-11 13:08:29.295738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.920 [2024-08-11 13:08:29.295812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:37.920 [2024-08-11 13:08:29.295834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:37.920 [2024-08-11 13:08:29.295847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.920 [2024-08-11 13:08:29.295967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.920 [2024-08-11 13:08:29.295991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.920 [2024-08-11 13:08:29.296005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:37.920 [2024-08-11 13:08:29.296016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.920 [2024-08-11 13:08:29.296049] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:37.920 [2024-08-11 13:08:29.296426] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:37.921 [2024-08-11 13:08:29.296459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.296472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.921 [2024-08-11 13:08:29.296485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:26:37.921 [2024-08-11 13:08:29.296500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.297003] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:37.921 [2024-08-11 13:08:29.297031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.297044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:37.921 [2024-08-11 13:08:29.297057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:37.921 [2024-08-11 13:08:29.297068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.297139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.297158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:37.921 [2024-08-11 13:08:29.297171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:37.921 [2024-08-11 13:08:29.297182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.297558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:37.921 [2024-08-11 13:08:29.297580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.921 [2024-08-11 13:08:29.297594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:26:37.921 [2024-08-11 13:08:29.297605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.297704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.297725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.921 [2024-08-11 13:08:29.297746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:37.921 [2024-08-11 13:08:29.297771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.297806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.297828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:37.921 [2024-08-11 13:08:29.297840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:37.921 [2024-08-11 13:08:29.297851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.297898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:37.921 [2024-08-11 13:08:29.299367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.299404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.921 [2024-08-11 13:08:29.299420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:26:37.921 [2024-08-11 13:08:29.299448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.299493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.299509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:37.921 [2024-08-11 13:08:29.299521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:37.921 [2024-08-11 13:08:29.299532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.299580] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:37.921 [2024-08-11 13:08:29.299621] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:37.921 [2024-08-11 13:08:29.299670] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:37.921 [2024-08-11 13:08:29.299702] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:37.921 [2024-08-11 13:08:29.299812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:37.921 [2024-08-11 13:08:29.299828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:37.921 [2024-08-11 13:08:29.299842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:37.921 [2024-08-11 13:08:29.299857] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:37.921 [2024-08-11 13:08:29.299908] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:37.921 [2024-08-11 13:08:29.299924] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:37.921 [2024-08-11 13:08:29.299935] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:37.921 [2024-08-11 13:08:29.299961] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:37.921 [2024-08-11 13:08:29.299972] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:37.921 [2024-08-11 13:08:29.299985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.300000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:37.921 [2024-08-11 13:08:29.300012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:26:37.921 [2024-08-11 13:08:29.300023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.300115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.921 [2024-08-11 13:08:29.300145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:37.921 [2024-08-11 13:08:29.300166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:37.921 [2024-08-11 13:08:29.300177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.921 [2024-08-11 13:08:29.300287] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:37.921 [2024-08-11 13:08:29.300304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:37.921 [2024-08-11 13:08:29.300332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:37.921 [2024-08-11 13:08:29.300370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:37.921 [2024-08-11 13:08:29.300403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.921 [2024-08-11 13:08:29.300424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:37.921 [2024-08-11 13:08:29.300435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:37.921 [2024-08-11 13:08:29.300445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.921 [2024-08-11 13:08:29.300455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:37.921 [2024-08-11 13:08:29.300466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:37.921 [2024-08-11 13:08:29.300476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:37.921 [2024-08-11 13:08:29.300498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300508] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:37.921 [2024-08-11 13:08:29.300529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:37.921 [2024-08-11 13:08:29.300563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:37.921 [2024-08-11 13:08:29.300594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:37.921 [2024-08-11 13:08:29.300625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300635] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:37.921 [2024-08-11 13:08:29.300655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.921 [2024-08-11 13:08:29.300676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:37.921 [2024-08-11 13:08:29.300687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:37.921 [2024-08-11 13:08:29.300697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.921 [2024-08-11 13:08:29.300713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:37.921 [2024-08-11 13:08:29.300724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:37.921 [2024-08-11 13:08:29.300735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:37.921 [2024-08-11 13:08:29.300755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:37.921 [2024-08-11 13:08:29.300766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300776] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:37.921 [2024-08-11 13:08:29.300787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:37.921 [2024-08-11 13:08:29.300798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.921 [2024-08-11 13:08:29.300808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.921 [2024-08-11 13:08:29.300819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:37.921 [2024-08-11 13:08:29.300831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:37.921 [2024-08-11 13:08:29.300841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:37.921 
[2024-08-11 13:08:29.300852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:37.921 [2024-08-11 13:08:29.300862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:37.922 [2024-08-11 13:08:29.300889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:37.922 [2024-08-11 13:08:29.300905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:37.922 [2024-08-11 13:08:29.300920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.922 [2024-08-11 13:08:29.300933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:37.922 [2024-08-11 13:08:29.300944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:37.922 [2024-08-11 13:08:29.300956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:37.922 [2024-08-11 13:08:29.300968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:37.922 [2024-08-11 13:08:29.300979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:37.922 [2024-08-11 13:08:29.300990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:37.922 [2024-08-11 13:08:29.301001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:37.922 [2024-08-11 13:08:29.301013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:37.922 [2024-08-11 13:08:29.301028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:37.922 [2024-08-11 13:08:29.301040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:37.922 [2024-08-11 13:08:29.301051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:37.922 [2024-08-11 13:08:29.301062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:37.922 [2024-08-11 13:08:29.301073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:37.922 [2024-08-11 13:08:29.301096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:37.922 [2024-08-11 13:08:29.301111] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:37.922 [2024-08-11 13:08:29.301124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.922 [2024-08-11 13:08:29.301136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:37.922 [2024-08-11 13:08:29.301148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:37.922 [2024-08-11 13:08:29.301159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:37.922 [2024-08-11 13:08:29.301170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:37.922 [2024-08-11 13:08:29.301182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.301197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:37.922 [2024-08-11 13:08:29.301209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:26:37.922 [2024-08-11 13:08:29.301220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.317985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.318064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.922 [2024-08-11 13:08:29.318091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.680 ms 00:26:37.922 [2024-08-11 13:08:29.318129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.318297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.318336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:37.922 [2024-08-11 13:08:29.318361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:37.922 [2024-08-11 13:08:29.318377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.328471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.328754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.922 [2024-08-11 13:08:29.328784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.949 ms 00:26:37.922 [2024-08-11 13:08:29.328804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.328889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.328914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.922 [2024-08-11 13:08:29.328938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:37.922 [2024-08-11 13:08:29.328950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.329120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.329142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.922 [2024-08-11 13:08:29.329165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:26:37.922 [2024-08-11 13:08:29.329176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.329342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.329361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.922 [2024-08-11 13:08:29.329377] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:26:37.922 [2024-08-11 13:08:29.329388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.333988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.334047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.922 [2024-08-11 13:08:29.334066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:26:37.922 [2024-08-11 13:08:29.334078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.334242] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:37.922 [2024-08-11 13:08:29.334269] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:37.922 [2024-08-11 13:08:29.334293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.334305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:37.922 [2024-08-11 13:08:29.334318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:37.922 [2024-08-11 13:08:29.334343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.348280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.348338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:37.922 [2024-08-11 13:08:29.348354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.914 ms 00:26:37.922 [2024-08-11 13:08:29.348365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.348504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.348535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:37.922 [2024-08-11 13:08:29.348552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:37.922 [2024-08-11 13:08:29.348563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.348650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.348669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:37.922 [2024-08-11 13:08:29.348682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:37.922 [2024-08-11 13:08:29.348693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.349086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.349107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:37.922 [2024-08-11 13:08:29.349123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:26:37.922 [2024-08-11 13:08:29.349134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.349157] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:37.922 [2024-08-11 13:08:29.349173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.349188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:26:37.922 [2024-08-11 13:08:29.349201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:37.922 [2024-08-11 13:08:29.349212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.358041] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:37.922 [2024-08-11 13:08:29.358285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.358305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:37.922 [2024-08-11 13:08:29.358320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.043 ms 00:26:37.922 [2024-08-11 13:08:29.358332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.360759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.360795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:37.922 [2024-08-11 13:08:29.360810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:26:37.922 [2024-08-11 13:08:29.360821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.360959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.360981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:37.922 [2024-08-11 13:08:29.360994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:37.922 [2024-08-11 13:08:29.361006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.361063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.361080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:37.922 [2024-08-11 13:08:29.361092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:37.922 [2024-08-11 13:08:29.361103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.922 [2024-08-11 13:08:29.361165] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:37.922 [2024-08-11 13:08:29.361186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.922 [2024-08-11 13:08:29.361208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:37.922 [2024-08-11 13:08:29.361221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:37.923 [2024-08-11 13:08:29.361235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.923 [2024-08-11 13:08:29.364792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.923 [2024-08-11 13:08:29.364839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:37.923 [2024-08-11 13:08:29.364857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.519 ms 00:26:37.923 [2024-08-11 13:08:29.364884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.923 [2024-08-11 13:08:29.364966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.923 [2024-08-11 13:08:29.365003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:37.923 [2024-08-11 13:08:29.365025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:26:37.923 [2024-08-11 13:08:29.365037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.923 [2024-08-11 13:08:29.366142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 69.913 ms, result 0 00:27:21.882  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 107/1024 [MB] (26 MBps) Copying: 131/1024 [MB] (24 MBps) Copying: 154/1024 [MB] (22 MBps) Copying: 179/1024 [MB] (24 MBps) Copying: 203/1024 [MB] (24 MBps) Copying: 228/1024 [MB] (25 MBps) Copying: 254/1024 [MB] (25 MBps) Copying: 279/1024 [MB] (25 MBps) Copying: 305/1024 [MB] (26 MBps) Copying: 328/1024 [MB] (23 MBps) Copying: 352/1024 [MB] (23 MBps) Copying: 375/1024 [MB] (23 MBps) Copying: 398/1024 [MB] (23 MBps) Copying: 420/1024 [MB] (22 MBps) Copying: 443/1024 [MB] (22 MBps) Copying: 465/1024 [MB] (22 MBps) Copying: 488/1024 [MB] (22 MBps) Copying: 511/1024 [MB] (23 MBps) Copying: 534/1024 [MB] (22 MBps) Copying: 556/1024 [MB] (22 MBps) Copying: 579/1024 [MB] (22 MBps) Copying: 601/1024 [MB] (21 MBps) Copying: 622/1024 [MB] (21 MBps) Copying: 645/1024 [MB] (22 MBps) Copying: 667/1024 [MB] (21 MBps) Copying: 689/1024 [MB] (22 MBps) Copying: 712/1024 [MB] (23 MBps) Copying: 736/1024 [MB] (23 MBps) Copying: 760/1024 [MB] (24 MBps) Copying: 783/1024 [MB] (23 MBps) Copying: 807/1024 [MB] (23 MBps) Copying: 830/1024 [MB] (23 MBps) Copying: 853/1024 [MB] (23 MBps) Copying: 877/1024 [MB] (23 MBps) Copying: 900/1024 [MB] (23 MBps) Copying: 924/1024 [MB] (23 MBps) Copying: 947/1024 [MB] (23 MBps) Copying: 970/1024 [MB] (22 MBps) Copying: 993/1024 [MB] (23 MBps) Copying: 1016/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-08-11 13:09:13.223781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.882 [2024-08-11 13:09:13.224153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:21.882 [2024-08-11 13:09:13.224193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.882 [2024-08-11 13:09:13.224210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.882 [2024-08-11 13:09:13.224558] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:21.882 [2024-08-11 13:09:13.225082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.882 [2024-08-11 13:09:13.225108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:21.882 [2024-08-11 13:09:13.225125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:27:21.882 [2024-08-11 13:09:13.225139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.882 [2024-08-11 13:09:13.225444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.882 [2024-08-11 13:09:13.225475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:21.882 [2024-08-11 13:09:13.225491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:27:21.882 [2024-08-11 13:09:13.225520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.882 [2024-08-11 13:09:13.225566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.882 [2024-08-11 13:09:13.225584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:21.882 [2024-08-11 13:09:13.225600] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:21.882 [2024-08-11 13:09:13.225615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.882 [2024-08-11 13:09:13.225704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.882 [2024-08-11 13:09:13.225724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:21.882 [2024-08-11 13:09:13.225746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:21.882 [2024-08-11 13:09:13.225770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.882 [2024-08-11 13:09:13.225796] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:21.882 [2024-08-11 13:09:13.225818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.225998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:21.882 [2024-08-11 13:09:13.226503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226533] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 
13:09:13.226931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.226990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:27:21.883 [2024-08-11 13:09:13.227663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:21.883 [2024-08-11 13:09:13.227773] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:21.883 [2024-08-11 13:09:13.227789] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f745d491-457d-424d-b208-1b4fc2dfd6d9 00:27:21.883 [2024-08-11 13:09:13.227803] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:21.883 [2024-08-11 13:09:13.227817] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:21.883 [2024-08-11 13:09:13.227851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:21.883 [2024-08-11 13:09:13.227881] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:21.883 [2024-08-11 13:09:13.227935] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:21.883 [2024-08-11 13:09:13.227955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:21.883 [2024-08-11 13:09:13.227974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:21.883 [2024-08-11 13:09:13.227987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:21.883 [2024-08-11 13:09:13.228000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:21.883 [2024-08-11 13:09:13.228015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.883 [2024-08-11 13:09:13.228029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:21.883 [2024-08-11 13:09:13.228045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:27:21.883 [2024-08-11 13:09:13.228058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.883 [2024-08-11 13:09:13.229714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.883 [2024-08-11 13:09:13.229778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:21.883 [2024-08-11 13:09:13.229796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:27:21.883 [2024-08-11 13:09:13.229817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.883 [2024-08-11 13:09:13.229929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.883 [2024-08-11 13:09:13.229951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:21.883 [2024-08-11 13:09:13.229967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:21.883 [2024-08-11 13:09:13.229980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.883 [2024-08-11 13:09:13.235110] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:27:21.883 [2024-08-11 13:09:13.235153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:21.883 [2024-08-11 13:09:13.235177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.883 [2024-08-11 13:09:13.235191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.883 [2024-08-11 13:09:13.235263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.883 [2024-08-11 13:09:13.235282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:21.883 [2024-08-11 13:09:13.235297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.883 [2024-08-11 13:09:13.235311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.883 [2024-08-11 13:09:13.235403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.883 [2024-08-11 13:09:13.235428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:21.883 [2024-08-11 13:09:13.235444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.883 [2024-08-11 13:09:13.235464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.883 [2024-08-11 13:09:13.235492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.883 [2024-08-11 13:09:13.235510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:21.883 [2024-08-11 13:09:13.235524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.883 [2024-08-11 13:09:13.235538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.883 [2024-08-11 13:09:13.243484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.883 [2024-08-11 13:09:13.243557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:21.883 [2024-08-11 13:09:13.243580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.883 [2024-08-11 13:09:13.243591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.884 [2024-08-11 13:09:13.250861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.884 [2024-08-11 13:09:13.250931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:21.884 [2024-08-11 13:09:13.250946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.884 [2024-08-11 13:09:13.250956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.884 [2024-08-11 13:09:13.251001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.884 [2024-08-11 13:09:13.251024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:21.884 [2024-08-11 13:09:13.251035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.884 [2024-08-11 13:09:13.251045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.884 [2024-08-11 13:09:13.251102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.884 [2024-08-11 13:09:13.251115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:21.884 [2024-08-11 13:09:13.251126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.884 [2024-08-11 13:09:13.251135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:21.884 [2024-08-11 13:09:13.251211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.884 [2024-08-11 13:09:13.251243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:21.884 [2024-08-11 13:09:13.251254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.884 [2024-08-11 13:09:13.251264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.884 [2024-08-11 13:09:13.251301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.884 [2024-08-11 13:09:13.251323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:21.884 [2024-08-11 13:09:13.251335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.884 [2024-08-11 13:09:13.251345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.884 [2024-08-11 13:09:13.251386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.884 [2024-08-11 13:09:13.251400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:21.884 [2024-08-11 13:09:13.251410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.884 [2024-08-11 13:09:13.251421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.884 [2024-08-11 13:09:13.251474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:21.884 [2024-08-11 13:09:13.251490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:21.884 [2024-08-11 13:09:13.251501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:21.884 [2024-08-11 13:09:13.251511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.884 [2024-08-11 13:09:13.251645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.843 ms, result 0 00:27:21.884 00:27:21.884 00:27:21.884 13:09:13 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:23.786 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:23.786 13:09:15 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:23.786 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:27:23.786 [2024-08-11 13:09:15.376203] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:27:23.786 [2024-08-11 13:09:15.376394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91574 ] 00:27:24.045 [2024-08-11 13:09:15.526477] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.045 [2024-08-11 13:09:15.569230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.306 [2024-08-11 13:09:15.662904] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:24.306 [2024-08-11 13:09:15.663020] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:24.306 [2024-08-11 13:09:15.818504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.818574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:24.306 [2024-08-11 13:09:15.818607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:24.306 [2024-08-11 13:09:15.818618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.818681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.818698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:24.306 [2024-08-11 13:09:15.818709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:24.306 [2024-08-11 13:09:15.818719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.818747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:24.306 [2024-08-11 13:09:15.819054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:24.306 [2024-08-11 13:09:15.819085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.819097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:24.306 [2024-08-11 13:09:15.819107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:27:24.306 [2024-08-11 13:09:15.819128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.819558] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:24.306 [2024-08-11 13:09:15.819594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.819606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:24.306 [2024-08-11 13:09:15.819617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:24.306 [2024-08-11 13:09:15.819627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.819678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.819694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:24.306 [2024-08-11 13:09:15.819704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:24.306 [2024-08-11 13:09:15.819714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.820173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:24.306 [2024-08-11 13:09:15.820208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:24.306 [2024-08-11 13:09:15.820244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:27:24.306 [2024-08-11 13:09:15.820255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.820358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.820390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:24.306 [2024-08-11 13:09:15.820403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:24.306 [2024-08-11 13:09:15.820426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.820460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.820479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:24.306 [2024-08-11 13:09:15.820489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:24.306 [2024-08-11 13:09:15.820499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.820541] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:24.306 [2024-08-11 13:09:15.821942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.821978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:24.306 [2024-08-11 13:09:15.821991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:27:24.306 [2024-08-11 13:09:15.822006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.822045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.822059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:24.306 [2024-08-11 13:09:15.822069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:24.306 [2024-08-11 13:09:15.822078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.822103] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:24.306 [2024-08-11 13:09:15.822169] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:24.306 [2024-08-11 13:09:15.822212] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:24.306 [2024-08-11 13:09:15.822236] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:24.306 [2024-08-11 13:09:15.822330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:24.306 [2024-08-11 13:09:15.822344] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:24.306 [2024-08-11 13:09:15.822356] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:24.306 [2024-08-11 13:09:15.822369] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:24.306 [2024-08-11 13:09:15.822390] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:24.306 [2024-08-11 13:09:15.822401] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:24.306 [2024-08-11 13:09:15.822410] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:24.306 [2024-08-11 13:09:15.822423] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:24.306 [2024-08-11 13:09:15.822432] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:24.306 [2024-08-11 13:09:15.822443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.822456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:24.306 [2024-08-11 13:09:15.822466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:27:24.306 [2024-08-11 13:09:15.822475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.822553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.306 [2024-08-11 13:09:15.822566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:24.306 [2024-08-11 13:09:15.822586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:24.306 [2024-08-11 13:09:15.822596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.306 [2024-08-11 13:09:15.822732] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:24.306 [2024-08-11 13:09:15.822759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:24.306 [2024-08-11 13:09:15.822778] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:24.306 [2024-08-11 13:09:15.822789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.306 [2024-08-11 13:09:15.822799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:24.306 [2024-08-11 13:09:15.822810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:24.306 [2024-08-11 13:09:15.822820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:24.306 [2024-08-11 13:09:15.822829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:24.306 [2024-08-11 13:09:15.822839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:24.306 [2024-08-11 13:09:15.822848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:24.307 [2024-08-11 13:09:15.822857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:24.307 [2024-08-11 13:09:15.822866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:24.307 [2024-08-11 13:09:15.822891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:24.307 [2024-08-11 13:09:15.822901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:24.307 [2024-08-11 13:09:15.822910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:24.307 [2024-08-11 13:09:15.822919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.307 [2024-08-11 13:09:15.822942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:24.307 [2024-08-11 13:09:15.822954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:24.307 [2024-08-11 13:09:15.822978] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.307 [2024-08-11 13:09:15.822988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:24.307 [2024-08-11 13:09:15.822996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.307 [2024-08-11 13:09:15.823018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:24.307 [2024-08-11 13:09:15.823027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.307 [2024-08-11 13:09:15.823045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:24.307 [2024-08-11 13:09:15.823053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.307 [2024-08-11 13:09:15.823071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:24.307 [2024-08-11 13:09:15.823080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:24.307 [2024-08-11 13:09:15.823097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:24.307 [2024-08-11 13:09:15.823106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:24.307 [2024-08-11 13:09:15.823124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:24.307 [2024-08-11 13:09:15.823133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:24.307 [2024-08-11 13:09:15.823141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:24.307 [2024-08-11 13:09:15.823155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:24.307 [2024-08-11 13:09:15.823165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:24.307 [2024-08-11 13:09:15.823174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:24.307 [2024-08-11 13:09:15.823192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:24.307 [2024-08-11 13:09:15.823201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823209] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:24.307 [2024-08-11 13:09:15.823228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:24.307 [2024-08-11 13:09:15.823238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:24.307 [2024-08-11 13:09:15.823255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:24.307 [2024-08-11 13:09:15.823265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:24.307 [2024-08-11 13:09:15.823274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:24.307 [2024-08-11 13:09:15.823283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:24.307 
[2024-08-11 13:09:15.823293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:24.307 [2024-08-11 13:09:15.823302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:24.307 [2024-08-11 13:09:15.823311] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:24.307 [2024-08-11 13:09:15.823325] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:24.307 [2024-08-11 13:09:15.823337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.307 [2024-08-11 13:09:15.823355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:24.307 [2024-08-11 13:09:15.823367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:24.307 [2024-08-11 13:09:15.823377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:24.307 [2024-08-11 13:09:15.823387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:24.307 [2024-08-11 13:09:15.823396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:24.307 [2024-08-11 13:09:15.823406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:24.307 [2024-08-11 13:09:15.823416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:24.307 [2024-08-11 13:09:15.823426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:24.307 [2024-08-11 13:09:15.823436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:24.307 [2024-08-11 13:09:15.823445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:24.307 [2024-08-11 13:09:15.823455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:24.307 [2024-08-11 13:09:15.823465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:24.307 [2024-08-11 13:09:15.823475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:24.307 [2024-08-11 13:09:15.823495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:24.307 [2024-08-11 13:09:15.823507] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:24.307 [2024-08-11 13:09:15.823519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.307 [2024-08-11 13:09:15.823529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:24.307 [2024-08-11 13:09:15.823539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:24.307 [2024-08-11 13:09:15.823549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:24.307 [2024-08-11 13:09:15.823559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:24.307 [2024-08-11 13:09:15.823569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.823583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:24.307 [2024-08-11 13:09:15.823593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:27:24.307 [2024-08-11 13:09:15.823602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.841044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.841117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:24.307 [2024-08-11 13:09:15.841153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.388 ms 00:27:24.307 [2024-08-11 13:09:15.841168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.841308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.841330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:24.307 [2024-08-11 13:09:15.841365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:27:24.307 [2024-08-11 13:09:15.841391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.851204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.851270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:24.307 [2024-08-11 13:09:15.851295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.707 ms 00:27:24.307 [2024-08-11 13:09:15.851317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.851391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.851421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:24.307 [2024-08-11 13:09:15.851437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:24.307 [2024-08-11 13:09:15.851464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.851670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.851709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:24.307 [2024-08-11 13:09:15.851728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:27:24.307 [2024-08-11 13:09:15.851743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.851998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.852027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:24.307 [2024-08-11 13:09:15.852052] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:27:24.307 [2024-08-11 13:09:15.852071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.856609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.856665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:24.307 [2024-08-11 13:09:15.856696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:27:24.307 [2024-08-11 13:09:15.856706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.307 [2024-08-11 13:09:15.856843] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:24.307 [2024-08-11 13:09:15.856867] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:24.307 [2024-08-11 13:09:15.856919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.307 [2024-08-11 13:09:15.856931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:24.307 [2024-08-11 13:09:15.856942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:27:24.308 [2024-08-11 13:09:15.856953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.867798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.867842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:24.308 [2024-08-11 13:09:15.867880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.816 ms 00:27:24.308 [2024-08-11 13:09:15.867922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.868030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.868050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:24.308 [2024-08-11 13:09:15.868064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:24.308 [2024-08-11 13:09:15.868075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.868165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.868183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:24.308 [2024-08-11 13:09:15.868194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:24.308 [2024-08-11 13:09:15.868215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.868554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.868580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:24.308 [2024-08-11 13:09:15.868596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:27:24.308 [2024-08-11 13:09:15.868606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.868631] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:24.308 [2024-08-11 13:09:15.868646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.868660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:24.308 [2024-08-11 13:09:15.868671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:24.308 [2024-08-11 13:09:15.868681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.876305] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:24.308 [2024-08-11 13:09:15.876517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.876543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:24.308 [2024-08-11 13:09:15.876554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.811 ms 00:27:24.308 [2024-08-11 13:09:15.876564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.878633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.878682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:24.308 [2024-08-11 13:09:15.878711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.031 ms 00:27:24.308 [2024-08-11 13:09:15.878720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.878802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.878827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:24.308 [2024-08-11 13:09:15.878838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:24.308 [2024-08-11 13:09:15.878848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.878925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.878954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:24.308 [2024-08-11 13:09:15.878965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:24.308 [2024-08-11 13:09:15.878975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.879019] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:24.308 [2024-08-11 13:09:15.879054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.879074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:24.308 [2024-08-11 13:09:15.879085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:24.308 [2024-08-11 13:09:15.879098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.882745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.882800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:24.308 [2024-08-11 13:09:15.882831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:27:24.308 [2024-08-11 13:09:15.882841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.882938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.308 [2024-08-11 13:09:15.882971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:24.308 [2024-08-11 13:09:15.882983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:27:24.308 [2024-08-11 13:09:15.882992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.308 [2024-08-11 13:09:15.884352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 65.243 ms, result 0 00:28:08.909  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (22 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 115/1024 [MB] (23 MBps) Copying: 139/1024 [MB] (23 MBps) Copying: 162/1024 [MB] (23 MBps) Copying: 186/1024 [MB] (24 MBps) Copying: 210/1024 [MB] (23 MBps) Copying: 234/1024 [MB] (24 MBps) Copying: 257/1024 [MB] (23 MBps) Copying: 281/1024 [MB] (23 MBps) Copying: 304/1024 [MB] (23 MBps) Copying: 328/1024 [MB] (23 MBps) Copying: 351/1024 [MB] (23 MBps) Copying: 374/1024 [MB] (22 MBps) Copying: 398/1024 [MB] (23 MBps) Copying: 421/1024 [MB] (23 MBps) Copying: 444/1024 [MB] (23 MBps) Copying: 468/1024 [MB] (23 MBps) Copying: 491/1024 [MB] (23 MBps) Copying: 515/1024 [MB] (23 MBps) Copying: 538/1024 [MB] (23 MBps) Copying: 562/1024 [MB] (23 MBps) Copying: 585/1024 [MB] (23 MBps) Copying: 608/1024 [MB] (22 MBps) Copying: 631/1024 [MB] (23 MBps) Copying: 655/1024 [MB] (23 MBps) Copying: 679/1024 [MB] (24 MBps) Copying: 703/1024 [MB] (24 MBps) Copying: 728/1024 [MB] (24 MBps) Copying: 751/1024 [MB] (23 MBps) Copying: 775/1024 [MB] (23 MBps) Copying: 799/1024 [MB] (24 MBps) Copying: 823/1024 [MB] (24 MBps) Copying: 847/1024 [MB] (23 MBps) Copying: 872/1024 [MB] (24 MBps) Copying: 896/1024 [MB] (24 MBps) Copying: 920/1024 [MB] (24 MBps) Copying: 944/1024 [MB] (23 MBps) Copying: 968/1024 [MB] (24 MBps) Copying: 992/1024 [MB] (24 MBps) Copying: 1016/1024 [MB] (24 MBps) Copying: 1048240/1048576 [kB] (6904 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-08-11 13:10:00.287182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.909 [2024-08-11 13:10:00.287444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:08.909 [2024-08-11 13:10:00.287474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:08.909 [2024-08-11 13:10:00.287486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.909 [2024-08-11 13:10:00.289062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:08.909 [2024-08-11 13:10:00.291894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.909 [2024-08-11 13:10:00.292010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:08.909 [2024-08-11 13:10:00.292055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:28:08.909 [2024-08-11 13:10:00.292066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.909 [2024-08-11 13:10:00.302313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.909 [2024-08-11 13:10:00.302351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:08.909 [2024-08-11 13:10:00.302393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.792 ms 00:28:08.909 [2024-08-11 13:10:00.302403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.909 [2024-08-11 13:10:00.302434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.909 [2024-08-11 13:10:00.302452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 
00:28:08.909 [2024-08-11 13:10:00.302463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:08.909 [2024-08-11 13:10:00.302473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.909 [2024-08-11 13:10:00.302520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.909 [2024-08-11 13:10:00.302533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:08.909 [2024-08-11 13:10:00.302542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:08.909 [2024-08-11 13:10:00.302551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.909 [2024-08-11 13:10:00.302567] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:08.909 [2024-08-11 13:10:00.302590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130816 / 261120 wr_cnt: 1 state: open 00:28:08.909 [2024-08-11 13:10:00.302618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302810] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.302993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.303003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.303014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.303023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.303033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:08.909 [2024-08-11 13:10:00.303043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 
[2024-08-11 13:10:00.303093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:28:08.910 [2024-08-11 13:10:00.303356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:08.910 [2024-08-11 13:10:00.303679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:08.910 [2024-08-11 13:10:00.303694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f745d491-457d-424d-b208-1b4fc2dfd6d9 00:28:08.910 [2024-08-11 13:10:00.303704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130816 00:28:08.910 [2024-08-11 13:10:00.303712] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130848 00:28:08.910 [2024-08-11 13:10:00.303722] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130816 00:28:08.910 [2024-08-11 13:10:00.303732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:28:08.910 [2024-08-11 13:10:00.303741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:08.910 [2024-08-11 13:10:00.303751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:08.910 [2024-08-11 13:10:00.303760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:08.910 [2024-08-11 13:10:00.303769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:08.910 [2024-08-11 13:10:00.303777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:08.910 [2024-08-11 13:10:00.303786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.910 [2024-08-11 13:10:00.303796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:08.910 [2024-08-11 13:10:00.303805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:28:08.910 [2024-08-11 13:10:00.303814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.910 [2024-08-11 13:10:00.305188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.910 [2024-08-11 13:10:00.305223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:08.910 [2024-08-11 13:10:00.305235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:28:08.910 [2024-08-11 13:10:00.305267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.910 [2024-08-11 13:10:00.305337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.910 [2024-08-11 13:10:00.305354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:08.910 [2024-08-11 13:10:00.305369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:08.910 [2024-08-11 13:10:00.305378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.910 [2024-08-11 
13:10:00.309330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.910 [2024-08-11 13:10:00.309377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:08.910 [2024-08-11 13:10:00.309390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.910 [2024-08-11 13:10:00.309399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.910 [2024-08-11 13:10:00.309446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.910 [2024-08-11 13:10:00.309459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:08.910 [2024-08-11 13:10:00.309473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.910 [2024-08-11 13:10:00.309482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.910 [2024-08-11 13:10:00.309526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.910 [2024-08-11 13:10:00.309542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:08.910 [2024-08-11 13:10:00.309560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.910 [2024-08-11 13:10:00.309569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.910 [2024-08-11 13:10:00.309587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.910 [2024-08-11 13:10:00.309615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:08.910 [2024-08-11 13:10:00.309624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.910 [2024-08-11 13:10:00.309638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.910 [2024-08-11 13:10:00.316679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.316726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:08.911 [2024-08-11 13:10:00.316755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.316774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.323517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.323562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:08.911 [2024-08-11 13:10:00.323599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.323608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.323666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.323691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:08.911 [2024-08-11 13:10:00.323701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.323711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.323737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.323748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:08.911 [2024-08-11 13:10:00.323757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.323767] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.323865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.323898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:08.911 [2024-08-11 13:10:00.323961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.323976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.324029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.324046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:08.911 [2024-08-11 13:10:00.324058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.324069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.324117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.324132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:08.911 [2024-08-11 13:10:00.324153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.324164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.324214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:08.911 [2024-08-11 13:10:00.324232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:08.911 [2024-08-11 13:10:00.324243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:08.911 [2024-08-11 13:10:00.324254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.911 [2024-08-11 13:10:00.324443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.949 ms, result 0 00:28:09.479 00:28:09.479 00:28:09.479 13:10:01 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:09.738 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:28:09.738 [2024-08-11 13:10:01.140185] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:28:09.738 [2024-08-11 13:10:01.140353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92019 ] 00:28:09.738 [2024-08-11 13:10:01.286484] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.738 [2024-08-11 13:10:01.322846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.998 [2024-08-11 13:10:01.403351] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:09.998 [2024-08-11 13:10:01.403466] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:09.998 [2024-08-11 13:10:01.558198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.558242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:09.998 [2024-08-11 13:10:01.558274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:09.998 [2024-08-11 13:10:01.558284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.558354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.558371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:09.998 [2024-08-11 13:10:01.558382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:09.998 [2024-08-11 13:10:01.558391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.558417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:09.998 [2024-08-11 13:10:01.558708] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:09.998 [2024-08-11 13:10:01.558733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.558743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:09.998 [2024-08-11 13:10:01.558754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:28:09.998 [2024-08-11 13:10:01.558763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.559216] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:09.998 [2024-08-11 13:10:01.559266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.559291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:09.998 [2024-08-11 13:10:01.559302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:09.998 [2024-08-11 13:10:01.559313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.559365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.559381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:09.998 [2024-08-11 13:10:01.559392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:09.998 [2024-08-11 13:10:01.559412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.559746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:09.998 [2024-08-11 13:10:01.559780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:09.998 [2024-08-11 13:10:01.559792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:28:09.998 [2024-08-11 13:10:01.559805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.559934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.559979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:09.998 [2024-08-11 13:10:01.559992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:28:09.998 [2024-08-11 13:10:01.560007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.560039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.560060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:09.998 [2024-08-11 13:10:01.560073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:09.998 [2024-08-11 13:10:01.560092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.560120] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:09.998 [2024-08-11 13:10:01.561442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.561492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:09.998 [2024-08-11 13:10:01.561505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:28:09.998 [2024-08-11 13:10:01.561515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.561560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.561574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:09.998 [2024-08-11 13:10:01.561585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:09.998 [2024-08-11 13:10:01.561594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.561618] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:09.998 [2024-08-11 13:10:01.561642] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:09.998 [2024-08-11 13:10:01.561684] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:09.998 [2024-08-11 13:10:01.561724] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:09.998 [2024-08-11 13:10:01.561817] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:09.998 [2024-08-11 13:10:01.561832] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:09.998 [2024-08-11 13:10:01.561844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:09.998 [2024-08-11 13:10:01.561857] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:09.998 [2024-08-11 13:10:01.561913] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:09.998 [2024-08-11 13:10:01.561927] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:09.998 [2024-08-11 13:10:01.561937] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:09.998 [2024-08-11 13:10:01.561946] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:09.998 [2024-08-11 13:10:01.561960] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:09.998 [2024-08-11 13:10:01.561972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.561985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:09.998 [2024-08-11 13:10:01.561996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:28:09.998 [2024-08-11 13:10:01.562005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.562087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.998 [2024-08-11 13:10:01.562101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:09.998 [2024-08-11 13:10:01.562111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:09.998 [2024-08-11 13:10:01.562121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.998 [2024-08-11 13:10:01.562260] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:09.999 [2024-08-11 13:10:01.562290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:09.999 [2024-08-11 13:10:01.562319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:09.999 [2024-08-11 13:10:01.562348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:09.999 [2024-08-11 13:10:01.562379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:09.999 [2024-08-11 13:10:01.562397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:09.999 [2024-08-11 13:10:01.562406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:09.999 [2024-08-11 13:10:01.562415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:09.999 [2024-08-11 13:10:01.562423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:09.999 [2024-08-11 13:10:01.562432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:09.999 [2024-08-11 13:10:01.562441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:09.999 [2024-08-11 13:10:01.562460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562469] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:09.999 [2024-08-11 13:10:01.562486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:09.999 [2024-08-11 13:10:01.562516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:09.999 [2024-08-11 13:10:01.562542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:09.999 [2024-08-11 13:10:01.562568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:09.999 [2024-08-11 13:10:01.562594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:09.999 [2024-08-11 13:10:01.562612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:09.999 [2024-08-11 13:10:01.562621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:09.999 [2024-08-11 13:10:01.562629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:09.999 [2024-08-11 13:10:01.562638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:09.999 [2024-08-11 13:10:01.562649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:09.999 [2024-08-11 13:10:01.562659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:09.999 [2024-08-11 13:10:01.562676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:09.999 [2024-08-11 13:10:01.562685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562693] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:09.999 [2024-08-11 13:10:01.562702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:09.999 [2024-08-11 13:10:01.562711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:09.999 [2024-08-11 13:10:01.562730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:09.999 [2024-08-11 13:10:01.562740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:09.999 [2024-08-11 13:10:01.562749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:09.999 
[2024-08-11 13:10:01.562758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:09.999 [2024-08-11 13:10:01.562767] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:09.999 [2024-08-11 13:10:01.562776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:09.999 [2024-08-11 13:10:01.562786] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:09.999 [2024-08-11 13:10:01.562800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.999 [2024-08-11 13:10:01.562812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:09.999 [2024-08-11 13:10:01.562822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:09.999 [2024-08-11 13:10:01.562832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:09.999 [2024-08-11 13:10:01.562842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:09.999 [2024-08-11 13:10:01.562851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:09.999 [2024-08-11 13:10:01.562861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:09.999 [2024-08-11 13:10:01.562915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:09.999 [2024-08-11 13:10:01.562928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:09.999 [2024-08-11 13:10:01.562938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:09.999 [2024-08-11 13:10:01.562948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:09.999 [2024-08-11 13:10:01.562958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:09.999 [2024-08-11 13:10:01.562968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:09.999 [2024-08-11 13:10:01.562979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:09.999 [2024-08-11 13:10:01.563001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:09.999 [2024-08-11 13:10:01.563011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:09.999 [2024-08-11 13:10:01.563026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.999 [2024-08-11 13:10:01.563038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:09.999 [2024-08-11 13:10:01.563048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:09.999 [2024-08-11 13:10:01.563058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:09.999 [2024-08-11 13:10:01.563068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:09.999 [2024-08-11 13:10:01.563080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.999 [2024-08-11 13:10:01.563095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:09.999 [2024-08-11 13:10:01.563105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:28:09.999 [2024-08-11 13:10:01.563115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.999 [2024-08-11 13:10:01.576986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.999 [2024-08-11 13:10:01.577046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:09.999 [2024-08-11 13:10:01.577090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.806 ms 00:28:09.999 [2024-08-11 13:10:01.577110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.999 [2024-08-11 13:10:01.577212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.999 [2024-08-11 13:10:01.577228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:09.999 [2024-08-11 13:10:01.577245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:09.999 [2024-08-11 13:10:01.577268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.999 [2024-08-11 13:10:01.584252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.999 [2024-08-11 13:10:01.584291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:09.999 [2024-08-11 13:10:01.584321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:28:09.999 [2024-08-11 13:10:01.584331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.999 [2024-08-11 13:10:01.584376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.999 [2024-08-11 13:10:01.584394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:09.999 [2024-08-11 13:10:01.584414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:09.999 [2024-08-11 13:10:01.584423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.999 [2024-08-11 13:10:01.584562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.999 [2024-08-11 13:10:01.584580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:09.999 [2024-08-11 13:10:01.584591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:09.999 [2024-08-11 13:10:01.584612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.999 [2024-08-11 13:10:01.584746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.999 [2024-08-11 13:10:01.584763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:09.999 [2024-08-11 13:10:01.584778] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:28:09.999 [2024-08-11 13:10:01.584787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.000 [2024-08-11 13:10:01.589139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.000 [2024-08-11 13:10:01.589184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.000 [2024-08-11 13:10:01.589215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.327 ms 00:28:10.000 [2024-08-11 13:10:01.589225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.000 [2024-08-11 13:10:01.589460] ftl_nv_cache.c:1724:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:10.000 [2024-08-11 13:10:01.589488] ftl_nv_cache.c:1728:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:10.000 [2024-08-11 13:10:01.589503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.000 [2024-08-11 13:10:01.589513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:10.000 [2024-08-11 13:10:01.589540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:10.000 [2024-08-11 13:10:01.589551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.601561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.601606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:10.260 [2024-08-11 13:10:01.601645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.988 ms 00:28:10.260 [2024-08-11 13:10:01.601655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.601758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.601778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:10.260 [2024-08-11 13:10:01.601789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:10.260 [2024-08-11 13:10:01.601800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.601900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.601948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:10.260 [2024-08-11 13:10:01.601978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:10.260 [2024-08-11 13:10:01.601988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.602339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.602369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:10.260 [2024-08-11 13:10:01.602394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:28:10.260 [2024-08-11 13:10:01.602404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.602432] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:10.260 [2024-08-11 13:10:01.602447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.602457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:10.260 [2024-08-11 13:10:01.602467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:10.260 [2024-08-11 13:10:01.602490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.609901] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:10.260 [2024-08-11 13:10:01.610099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.610117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:10.260 [2024-08-11 13:10:01.610128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.581 ms 00:28:10.260 [2024-08-11 13:10:01.610137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.612401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.612431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:10.260 [2024-08-11 13:10:01.612459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:28:10.260 [2024-08-11 13:10:01.612468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.612530] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:28:10.260 [2024-08-11 13:10:01.613171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.613216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:10.260 [2024-08-11 13:10:01.613229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:28:10.260 [2024-08-11 13:10:01.613239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.613306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.613322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:10.260 [2024-08-11 13:10:01.613346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:10.260 [2024-08-11 13:10:01.613356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.613418] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:10.260 [2024-08-11 13:10:01.613450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.613460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:10.260 [2024-08-11 13:10:01.613470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:10.260 [2024-08-11 13:10:01.613480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.617277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.617337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:10.260 [2024-08-11 13:10:01.617367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:28:10.260 [2024-08-11 13:10:01.617377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.617457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.260 [2024-08-11 13:10:01.617485] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:10.260 [2024-08-11 13:10:01.617496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:10.260 [2024-08-11 13:10:01.617505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.260 [2024-08-11 13:10:01.626867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 65.833 ms, result 0 00:28:56.367  Copying: 24/1024 [MB] (24 MBps) Copying: 46/1024 [MB] (22 MBps) Copying: 68/1024 [MB] (21 MBps) Copying: 91/1024 [MB] (22 MBps) Copying: 114/1024 [MB] (23 MBps) Copying: 137/1024 [MB] (23 MBps) Copying: 160/1024 [MB] (23 MBps) Copying: 182/1024 [MB] (22 MBps) Copying: 204/1024 [MB] (22 MBps) Copying: 227/1024 [MB] (22 MBps) Copying: 249/1024 [MB] (22 MBps) Copying: 271/1024 [MB] (22 MBps) Copying: 294/1024 [MB] (22 MBps) Copying: 317/1024 [MB] (23 MBps) Copying: 339/1024 [MB] (22 MBps) Copying: 362/1024 [MB] (22 MBps) Copying: 384/1024 [MB] (22 MBps) Copying: 406/1024 [MB] (22 MBps) Copying: 429/1024 [MB] (22 MBps) Copying: 452/1024 [MB] (22 MBps) Copying: 473/1024 [MB] (21 MBps) Copying: 495/1024 [MB] (21 MBps) Copying: 517/1024 [MB] (21 MBps) Copying: 538/1024 [MB] (21 MBps) Copying: 560/1024 [MB] (21 MBps) Copying: 582/1024 [MB] (21 MBps) Copying: 604/1024 [MB] (22 MBps) Copying: 626/1024 [MB] (22 MBps) Copying: 649/1024 [MB] (22 MBps) Copying: 671/1024 [MB] (22 MBps) Copying: 693/1024 [MB] (22 MBps) Copying: 716/1024 [MB] (22 MBps) Copying: 738/1024 [MB] (22 MBps) Copying: 761/1024 [MB] (22 MBps) Copying: 784/1024 [MB] (22 MBps) Copying: 806/1024 [MB] (22 MBps) Copying: 829/1024 [MB] (22 MBps) Copying: 852/1024 [MB] (22 MBps) Copying: 873/1024 [MB] (21 MBps) Copying: 896/1024 [MB] (22 MBps) Copying: 918/1024 [MB] (22 MBps) Copying: 940/1024 [MB] (21 MBps) Copying: 962/1024 [MB] (21 MBps) Copying: 983/1024 [MB] (21 MBps) Copying: 1005/1024 [MB] (21 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-08-11 13:10:47.827646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.367 [2024-08-11 13:10:47.827721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:56.367 [2024-08-11 13:10:47.827755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:56.367 [2024-08-11 13:10:47.827766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.367 [2024-08-11 13:10:47.827802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:56.367 [2024-08-11 13:10:47.828327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.367 [2024-08-11 13:10:47.828368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:56.367 [2024-08-11 13:10:47.828382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:28:56.367 [2024-08-11 13:10:47.828393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.367 [2024-08-11 13:10:47.828649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.367 [2024-08-11 13:10:47.828675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:56.367 [2024-08-11 13:10:47.828700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:28:56.367 [2024-08-11 13:10:47.828719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.367 [2024-08-11 13:10:47.828766] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.367 [2024-08-11 13:10:47.828781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:56.367 [2024-08-11 13:10:47.828792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:56.367 [2024-08-11 13:10:47.828802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.367 [2024-08-11 13:10:47.828903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.367 [2024-08-11 13:10:47.828921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:56.367 [2024-08-11 13:10:47.828934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:56.367 [2024-08-11 13:10:47.828944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.367 [2024-08-11 13:10:47.828964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:56.367 [2024-08-11 13:10:47.828981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:28:56.367 [2024-08-11 13:10:47.829004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:56.367 [2024-08-11 13:10:47.829017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:56.367 [2024-08-11 13:10:47.829028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 
00:28:56.368 [2024-08-11 13:10:47.829196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 
wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.829990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.830000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:56.368 [2024-08-11 13:10:47.830011] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:56.369 [2024-08-11 13:10:47.830117] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:56.369 [2024-08-11 13:10:47.830128] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f745d491-457d-424d-b208-1b4fc2dfd6d9 00:28:56.369 [2024-08-11 13:10:47.830139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:28:56.369 [2024-08-11 13:10:47.830148] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104 00:28:56.369 [2024-08-11 13:10:47.830158] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072 00:28:56.369 [2024-08-11 13:10:47.830169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:28:56.369 [2024-08-11 13:10:47.830179] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:56.369 [2024-08-11 13:10:47.830189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:56.369 [2024-08-11 13:10:47.830199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:56.369 [2024-08-11 13:10:47.830209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:56.369 [2024-08-11 13:10:47.830218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:56.369 [2024-08-11 13:10:47.830229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.369 [2024-08-11 13:10:47.830240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:56.369 [2024-08-11 13:10:47.830250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:28:56.369 [2024-08-11 13:10:47.830265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.831608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.369 [2024-08-11 13:10:47.831656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:56.369 [2024-08-11 13:10:47.831669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.323 ms 00:28:56.369 [2024-08-11 13:10:47.831690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.831773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.369 [2024-08-11 13:10:47.831794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:56.369 [2024-08-11 13:10:47.831805] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:56.369 [2024-08-11 13:10:47.831815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.836180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.836218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.369 [2024-08-11 13:10:47.836246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.836269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.836320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.836339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.369 [2024-08-11 13:10:47.836349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.836358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.836394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.836411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.369 [2024-08-11 13:10:47.836421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.836445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.836494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.836508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.369 [2024-08-11 13:10:47.836525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.836535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.844975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.845028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.369 [2024-08-11 13:10:47.845072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.845082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.852109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.369 [2024-08-11 13:10:47.852147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.852157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.852239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:56.369 [2024-08-11 13:10:47.852249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.852258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.852295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
00:28:56.369 [2024-08-11 13:10:47.852304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.852334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.852445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:56.369 [2024-08-11 13:10:47.852456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.852466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.852516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:56.369 [2024-08-11 13:10:47.852526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.852536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.852614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:56.369 [2024-08-11 13:10:47.852625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.852635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.369 [2024-08-11 13:10:47.852725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:56.369 [2024-08-11 13:10:47.852747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.369 [2024-08-11 13:10:47.852762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.369 [2024-08-11 13:10:47.852904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 25.210 ms, result 0 00:28:56.628 00:28:56.628 00:28:56.628 13:10:48 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:58.533 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 90536 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 90536 ']' 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 90536 00:28:58.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90536) - No such process 00:28:58.534 Process with pid 90536 is not found 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 90536 is not found' 
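The WAF of 1.0104 reported in the statistics above is simply total writes divided by user writes (3104 / 3072). Once the fast shutdown finishes, the restore test verifies data integrity with md5sum against a checksum recorded before shutdown and then tears the fixture down. A minimal sketch of that verify-and-kill pattern follows; the file names and the $svcpid variable are placeholders for illustration, not the test's actual helpers:

    # Sketch only: verify restored data, remove per-test artifacts, stop the target if still alive.
    md5sum -c testfile.md5                 # prints "testfile: OK" when the restored data matches
    rm -f testfile testfile.md5 ftl.json   # per-test artifacts
    if kill -0 "$svcpid" 2>/dev/null; then # kill -0 only probes whether the pid still exists
        kill "$svcpid"
    else
        echo "Process with pid $svcpid is not found"
    fi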
00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:58.534 Remove shared memory files 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_band_md /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_l2p_l1 /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_l2p_l2 /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_l2p_l2_ctx /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_nvc_md /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_p2l_pool /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_sb /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_sb_shm /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_trim_bitmap /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_trim_log /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_trim_md /dev/hugepages/ftl_f745d491-457d-424d-b208-1b4fc2dfd6d9_vmap 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:58.534 00:28:58.534 real 3m18.014s 00:28:58.534 user 3m4.720s 00:28:58.534 sys 0m14.925s 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:58.534 ************************************ 00:28:58.534 END TEST ftl_restore_fast 00:28:58.534 13:10:49 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:58.534 ************************************ 00:28:58.534 13:10:50 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:58.534 13:10:50 ftl -- ftl/ftl.sh@14 -- # killprocess 83245 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@946 -- # '[' -z 83245 ']' 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@950 -- # kill -0 83245 00:28:58.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (83245) - No such process 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 83245 is not found' 00:28:58.534 Process with pid 83245 is not found 00:28:58.534 13:10:50 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:58.534 13:10:50 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=92509 00:28:58.534 13:10:50 ftl -- ftl/ftl.sh@20 -- # waitforlisten 92509 00:28:58.534 13:10:50 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@827 -- # '[' -z 92509 ']' 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:58.534 13:10:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:58.534 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:28:58.534 [2024-08-11 13:10:50.118135] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:28:58.534 [2024-08-11 13:10:50.118319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92509 ] 00:28:58.793 [2024-08-11 13:10:50.269913] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.793 [2024-08-11 13:10:50.314837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.728 13:10:50 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:59.728 13:10:50 ftl -- common/autotest_common.sh@860 -- # return 0 00:28:59.728 13:10:50 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:59.728 nvme0n1 00:28:59.728 13:10:51 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:59.728 13:10:51 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:59.728 13:10:51 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:59.987 13:10:51 ftl -- ftl/common.sh@28 -- # stores=f4947e8a-39b1-428f-948b-4520cb1be239 00:28:59.987 13:10:51 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:59.987 13:10:51 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4947e8a-39b1-428f-948b-4520cb1be239 00:29:00.246 13:10:51 ftl -- ftl/ftl.sh@23 -- # killprocess 92509 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@946 -- # '[' -z 92509 ']' 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@950 -- # kill -0 92509 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@951 -- # uname 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92509 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:00.246 killing process with pid 92509 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92509' 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@965 -- # kill 92509 00:29:00.246 13:10:51 ftl -- common/autotest_common.sh@970 -- # wait 92509 00:29:00.505 13:10:51 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:00.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:00.764 Waiting for block devices as requested 00:29:00.764 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:00.764 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:01.022 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:01.022 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:06.293 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:06.293 13:10:57 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:29:06.293 Remove shared memory files 00:29:06.293 13:10:57 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:06.293 13:10:57 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:29:06.293 13:10:57 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:29:06.293 13:10:57 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:29:06.293 13:10:57 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:06.293 13:10:57 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:29:06.293 00:29:06.293 real 
14m0.550s 00:29:06.293 user 16m22.813s 00:29:06.293 sys 1m42.929s 00:29:06.293 13:10:57 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:06.293 13:10:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:06.293 ************************************ 00:29:06.293 END TEST ftl 00:29:06.293 ************************************ 00:29:06.293 13:10:57 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:06.293 13:10:57 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:29:06.293 13:10:57 -- spdk/autotest.sh@363 -- # '[' 0 -eq 1 ']' 00:29:06.293 13:10:57 -- spdk/autotest.sh@367 -- # '[' 0 -eq 1 ']' 00:29:06.293 13:10:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:06.293 13:10:57 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:06.293 13:10:57 -- spdk/autotest.sh@382 -- # [[ 0 -eq 1 ]] 00:29:06.293 13:10:57 -- spdk/autotest.sh@386 -- # [[ '' -eq 1 ]] 00:29:06.293 13:10:57 -- spdk/autotest.sh@391 -- # trap - SIGINT SIGTERM EXIT 00:29:06.293 13:10:57 -- spdk/autotest.sh@393 -- # timing_enter post_cleanup 00:29:06.293 13:10:57 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:06.293 13:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:06.293 13:10:57 -- spdk/autotest.sh@394 -- # autotest_cleanup 00:29:06.293 13:10:57 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:29:06.293 13:10:57 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:29:06.293 13:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:07.670 INFO: APP EXITING 00:29:07.670 INFO: killing all VMs 00:29:07.670 INFO: killing vhost app 00:29:07.670 INFO: EXIT DONE 00:29:07.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:08.497 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:08.497 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:08.497 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:29:08.497 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:29:09.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:09.324 Cleaning 00:29:09.324 Removing: /var/run/dpdk/spdk0/config 00:29:09.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:09.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:09.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:09.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:09.324 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:09.324 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:09.324 Removing: /var/run/dpdk/spdk0 00:29:09.324 Removing: /var/run/dpdk/spdk_pid68731 00:29:09.324 Removing: /var/run/dpdk/spdk_pid68881 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69080 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69162 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69190 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69302 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69312 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69471 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69536 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69608 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69700 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69772 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69812 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69843 00:29:09.324 Removing: /var/run/dpdk/spdk_pid69904 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70006 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70432 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70474 00:29:09.324 
Removing: /var/run/dpdk/spdk_pid70526 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70540 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70604 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70620 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70683 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70699 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70747 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70765 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70812 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70817 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70944 00:29:09.324 Removing: /var/run/dpdk/spdk_pid70980 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71056 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71206 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71274 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71305 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71754 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71841 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71945 00:29:09.324 Removing: /var/run/dpdk/spdk_pid71987 00:29:09.324 Removing: /var/run/dpdk/spdk_pid72012 00:29:09.324 Removing: /var/run/dpdk/spdk_pid72083 00:29:09.324 Removing: /var/run/dpdk/spdk_pid72698 00:29:09.584 Removing: /var/run/dpdk/spdk_pid72723 00:29:09.584 Removing: /var/run/dpdk/spdk_pid73210 00:29:09.584 Removing: /var/run/dpdk/spdk_pid73297 00:29:09.584 Removing: /var/run/dpdk/spdk_pid73401 00:29:09.584 Removing: /var/run/dpdk/spdk_pid73443 00:29:09.584 Removing: /var/run/dpdk/spdk_pid73468 00:29:09.584 Removing: /var/run/dpdk/spdk_pid73494 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75301 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75427 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75431 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75449 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75493 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75497 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75509 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75554 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75558 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75570 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75615 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75619 00:29:09.584 Removing: /var/run/dpdk/spdk_pid75631 00:29:09.584 Removing: /var/run/dpdk/spdk_pid76980 00:29:09.584 Removing: /var/run/dpdk/spdk_pid77058 00:29:09.584 Removing: /var/run/dpdk/spdk_pid78446 00:29:09.584 Removing: /var/run/dpdk/spdk_pid79793 00:29:09.584 Removing: /var/run/dpdk/spdk_pid79880 00:29:09.584 Removing: /var/run/dpdk/spdk_pid79951 00:29:09.584 Removing: /var/run/dpdk/spdk_pid80027 00:29:09.584 Removing: /var/run/dpdk/spdk_pid80132 00:29:09.584 Removing: /var/run/dpdk/spdk_pid80195 00:29:09.584 Removing: /var/run/dpdk/spdk_pid80324 00:29:09.584 Removing: /var/run/dpdk/spdk_pid80671 00:29:09.584 Removing: /var/run/dpdk/spdk_pid80697 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81161 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81335 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81430 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81534 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81566 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81597 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81874 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81912 00:29:09.584 Removing: /var/run/dpdk/spdk_pid81963 00:29:09.584 Removing: /var/run/dpdk/spdk_pid82314 00:29:09.584 Removing: /var/run/dpdk/spdk_pid82458 00:29:09.584 Removing: /var/run/dpdk/spdk_pid83245 00:29:09.584 Removing: /var/run/dpdk/spdk_pid83363 00:29:09.584 Removing: /var/run/dpdk/spdk_pid83523 00:29:09.584 Removing: 
/var/run/dpdk/spdk_pid83609 00:29:09.584 Removing: /var/run/dpdk/spdk_pid83968 00:29:09.584 Removing: /var/run/dpdk/spdk_pid84235 00:29:09.584 Removing: /var/run/dpdk/spdk_pid84575 00:29:09.584 Removing: /var/run/dpdk/spdk_pid84747 00:29:09.584 Removing: /var/run/dpdk/spdk_pid84878 00:29:09.584 Removing: /var/run/dpdk/spdk_pid84911 00:29:09.584 Removing: /var/run/dpdk/spdk_pid85050 00:29:09.584 Removing: /var/run/dpdk/spdk_pid85064 00:29:09.584 Removing: /var/run/dpdk/spdk_pid85100 00:29:09.584 Removing: /var/run/dpdk/spdk_pid85301 00:29:09.584 Removing: /var/run/dpdk/spdk_pid85515 00:29:09.584 Removing: /var/run/dpdk/spdk_pid85935 00:29:09.584 Removing: /var/run/dpdk/spdk_pid86377 00:29:09.584 Removing: /var/run/dpdk/spdk_pid86824 00:29:09.584 Removing: /var/run/dpdk/spdk_pid87335 00:29:09.584 Removing: /var/run/dpdk/spdk_pid87464 00:29:09.584 Removing: /var/run/dpdk/spdk_pid87561 00:29:09.584 Removing: /var/run/dpdk/spdk_pid88214 00:29:09.584 Removing: /var/run/dpdk/spdk_pid88283 00:29:09.584 Removing: /var/run/dpdk/spdk_pid88726 00:29:09.584 Removing: /var/run/dpdk/spdk_pid89135 00:29:09.584 Removing: /var/run/dpdk/spdk_pid89624 00:29:09.584 Removing: /var/run/dpdk/spdk_pid89742 00:29:09.584 Removing: /var/run/dpdk/spdk_pid89773 00:29:09.584 Removing: /var/run/dpdk/spdk_pid89833 00:29:09.584 Removing: /var/run/dpdk/spdk_pid89890 00:29:09.584 Removing: /var/run/dpdk/spdk_pid89950 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90134 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90191 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90249 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90322 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90349 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90405 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90536 00:29:09.584 Removing: /var/run/dpdk/spdk_pid90734 00:29:09.584 Removing: /var/run/dpdk/spdk_pid91121 00:29:09.584 Removing: /var/run/dpdk/spdk_pid91574 00:29:09.584 Removing: /var/run/dpdk/spdk_pid92019 00:29:09.584 Removing: /var/run/dpdk/spdk_pid92509 00:29:09.843 Clean 00:29:09.843 13:11:01 -- common/autotest_common.sh@1447 -- # return 0 00:29:09.843 13:11:01 -- spdk/autotest.sh@395 -- # timing_exit post_cleanup 00:29:09.843 13:11:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.843 13:11:01 -- common/autotest_common.sh@10 -- # set +x 00:29:09.843 13:11:01 -- spdk/autotest.sh@397 -- # timing_exit autotest 00:29:09.843 13:11:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.843 13:11:01 -- common/autotest_common.sh@10 -- # set +x 00:29:09.843 13:11:01 -- spdk/autotest.sh@398 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:09.843 13:11:01 -- spdk/autotest.sh@400 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:09.843 13:11:01 -- spdk/autotest.sh@400 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:09.843 13:11:01 -- spdk/autotest.sh@402 -- # hash lcov 00:29:09.843 13:11:01 -- spdk/autotest.sh@402 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:09.843 13:11:01 -- spdk/autotest.sh@404 -- # hostname 00:29:09.843 13:11:01 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:10.102 geninfo: WARNING: invalid characters removed from testname! 
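The coverage post-processing above captures a tracefile from the test run, merges it with the pre-test baseline, and then strips DPDK, system, and example sources from the combined report. A condensed sketch of that lcov pipeline (the --rc and --no-external options shown in the log are omitted for brevity, and output paths are shortened):

    # Capture test-time counters, merge with the baseline, then filter out external sources.
    lcov -q -c -d . -t "$(hostname)" -o cov_test.info
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info
    done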
00:29:32.036 13:11:22 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:34.571 13:11:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:36.559 13:11:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:39.093 13:11:30 -- spdk/autotest.sh@408 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:41.628 13:11:32 -- spdk/autotest.sh@409 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:44.159 13:11:35 -- spdk/autotest.sh@410 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:46.062 13:11:37 -- spdk/autotest.sh@411 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:46.062 13:11:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:46.062 13:11:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:46.062 13:11:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.062 13:11:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.063 13:11:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.063 13:11:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.063 13:11:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.063 13:11:37 -- paths/export.sh@5 -- $ export PATH 00:29:46.063 13:11:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.063 13:11:37 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:46.063 13:11:37 -- common/autobuild_common.sh@447 -- $ date +%s 00:29:46.063 13:11:37 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723381897.XXXXXX 00:29:46.063 13:11:37 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723381897.NeR57D 00:29:46.063 13:11:37 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:29:46.063 13:11:37 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:29:46.063 13:11:37 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:29:46.063 13:11:37 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:29:46.063 13:11:37 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:46.063 13:11:37 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:46.063 13:11:37 -- common/autobuild_common.sh@463 -- $ get_config_params 00:29:46.063 13:11:37 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:29:46.063 13:11:37 -- common/autotest_common.sh@10 -- $ set +x 00:29:46.063 13:11:37 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:29:46.063 13:11:37 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:29:46.063 13:11:37 -- pm/common@17 -- $ local monitor 00:29:46.063 13:11:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:46.063 13:11:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:46.063 13:11:37 -- pm/common@25 -- $ sleep 1 00:29:46.063 13:11:37 -- pm/common@21 -- $ date +%s 00:29:46.063 13:11:37 -- pm/common@21 -- $ date +%s 00:29:46.063 13:11:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load 
-d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723381897 00:29:46.063 13:11:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723381897 00:29:46.321 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723381897_collect-cpu-load.pm.log 00:29:46.321 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723381897_collect-vmstat.pm.log 00:29:47.257 13:11:38 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:29:47.257 13:11:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:47.257 13:11:38 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:47.257 13:11:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:47.257 13:11:38 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:47.257 13:11:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:47.257 13:11:38 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:47.257 13:11:38 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:47.257 13:11:38 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:47.257 13:11:38 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:47.257 13:11:38 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:47.257 13:11:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:47.257 13:11:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:47.257 13:11:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:47.257 13:11:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:47.257 13:11:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:47.257 13:11:38 -- pm/common@44 -- $ pid=94194 00:29:47.257 13:11:38 -- pm/common@50 -- $ kill -TERM 94194 00:29:47.257 13:11:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:47.257 13:11:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:47.257 13:11:38 -- pm/common@44 -- $ pid=94196 00:29:47.257 13:11:38 -- pm/common@50 -- $ kill -TERM 94196 00:29:47.257 + [[ -n 6022 ]] 00:29:47.257 + sudo kill 6022 00:29:47.266 [Pipeline] } 00:29:47.282 [Pipeline] // timeout 00:29:47.287 [Pipeline] } 00:29:47.301 [Pipeline] // stage 00:29:47.307 [Pipeline] } 00:29:47.321 [Pipeline] // catchError 00:29:47.330 [Pipeline] stage 00:29:47.332 [Pipeline] { (Stop VM) 00:29:47.345 [Pipeline] sh 00:29:47.625 + vagrant halt 00:29:50.158 ==> default: Halting domain... 00:29:56.735 [Pipeline] sh 00:29:57.014 + vagrant destroy -f 00:30:00.301 ==> default: Removing domain... 
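Earlier in this teardown, stop_monitor_resources ends the CPU-load and vmstat collectors that were started for packaging by reading their pidfiles and sending SIGTERM. A minimal sketch of that pattern, assuming the monitor names and power-log directory seen in the log:

    # Sketch: stop each background resource monitor via its pidfile.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile="$power_dir/$monitor.pid"
        if [[ -e "$pidfile" ]]; then
            kill -TERM "$(cat "$pidfile")"   # ask the monitor process to shut down
        fi
    done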
00:30:00.311 [Pipeline] sh 00:30:00.586 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:30:00.595 [Pipeline] } 00:30:00.608 [Pipeline] // stage 00:30:00.613 [Pipeline] } 00:30:00.626 [Pipeline] // dir 00:30:00.631 [Pipeline] } 00:30:00.643 [Pipeline] // wrap 00:30:00.649 [Pipeline] } 00:30:00.660 [Pipeline] // catchError 00:30:00.669 [Pipeline] stage 00:30:00.670 [Pipeline] { (Epilogue) 00:30:00.682 [Pipeline] sh 00:30:00.961 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:06.239 [Pipeline] catchError 00:30:06.241 [Pipeline] { 00:30:06.253 [Pipeline] sh 00:30:06.571 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:06.848 Artifacts sizes are good 00:30:06.857 [Pipeline] } 00:30:06.870 [Pipeline] // catchError 00:30:06.880 [Pipeline] archiveArtifacts 00:30:06.886 Archiving artifacts 00:30:07.028 [Pipeline] cleanWs 00:30:07.038 [WS-CLEANUP] Deleting project workspace... 00:30:07.038 [WS-CLEANUP] Deferred wipeout is used... 00:30:07.044 [WS-CLEANUP] done 00:30:07.046 [Pipeline] } 00:30:07.060 [Pipeline] // stage 00:30:07.064 [Pipeline] } 00:30:07.077 [Pipeline] // node 00:30:07.082 [Pipeline] End of Pipeline 00:30:07.171 Finished: SUCCESS