00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 100 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3602 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.109 The recommended git tool is: git 00:00:00.110 using credential 00000000-0000-0000-0000-000000000002 00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.154 Fetching changes from the remote Git repository 00:00:00.156 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.194 Using shallow fetch with depth 1 00:00:00.194 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.194 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.228 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.492 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.505 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.518 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:05.518 > git config core.sparsecheckout # timeout=10 00:00:05.530 > git read-tree -mu HEAD # timeout=10 00:00:05.546 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:05.563 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:05.563 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:05.654 [Pipeline] Start of Pipeline 00:00:05.678 [Pipeline] library 00:00:05.679 Loading library shm_lib@master 00:00:05.679 Library shm_lib@master is cached. Copying from home. 00:00:05.695 [Pipeline] node 00:00:05.718 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.719 [Pipeline] { 00:00:05.728 [Pipeline] catchError 00:00:05.729 [Pipeline] { 00:00:05.744 [Pipeline] wrap 00:00:05.752 [Pipeline] { 00:00:05.760 [Pipeline] stage 00:00:05.761 [Pipeline] { (Prologue) 00:00:05.777 [Pipeline] echo 00:00:05.779 Node: VM-host-SM38 00:00:05.784 [Pipeline] cleanWs 00:00:05.796 [WS-CLEANUP] Deleting project workspace... 00:00:05.796 [WS-CLEANUP] Deferred wipeout is used... 
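(Note: the pinned, depth-1 checkout traced above can be replayed outside Jenkins. A minimal sketch, assuming plain git and shortening the hashed workspace path to ./jbp; the Gerrit URL and commit hash are taken verbatim from the log:)

    # Sketch: replay the shallow jbp checkout seen above (not the exact
    # Jenkins plugin invocation; timeouts and credentials are omitted).
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 keeps only the tip of refs/heads/master, matching the log
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7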
00:00:05.802 [WS-CLEANUP] done 00:00:06.003 [Pipeline] setCustomBuildProperty 00:00:06.107 [Pipeline] httpRequest 00:00:07.253 [Pipeline] echo 00:00:07.255 Sorcerer 10.211.164.101 is alive 00:00:07.263 [Pipeline] retry 00:00:07.265 [Pipeline] { 00:00:07.278 [Pipeline] httpRequest 00:00:07.283 HttpMethod: GET 00:00:07.283 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.283 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.301 Response Code: HTTP/1.1 200 OK 00:00:07.301 Success: Status code 200 is in the accepted range: 200,404 00:00:07.302 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:24.354 [Pipeline] } 00:00:24.371 [Pipeline] // retry 00:00:24.379 [Pipeline] sh 00:00:24.668 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:24.687 [Pipeline] httpRequest 00:00:25.123 [Pipeline] echo 00:00:25.125 Sorcerer 10.211.164.101 is alive 00:00:25.134 [Pipeline] retry 00:00:25.136 [Pipeline] { 00:00:25.150 [Pipeline] httpRequest 00:00:25.156 HttpMethod: GET 00:00:25.157 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:25.157 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:25.177 Response Code: HTTP/1.1 200 OK 00:00:25.178 Success: Status code 200 is in the accepted range: 200,404 00:00:25.178 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:31.770 [Pipeline] } 00:01:31.787 [Pipeline] // retry 00:01:31.794 [Pipeline] sh 00:01:32.082 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:34.638 [Pipeline] sh 00:01:34.922 + git -C spdk log --oneline -n5 00:01:34.922 b18e1bd62 version: v24.09.1-pre 00:01:34.922 19524ad45 version: v24.09 00:01:34.922 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:34.922 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:34.922 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:34.942 [Pipeline] withCredentials 00:01:34.953 > git --version # timeout=10 00:01:34.966 > git --version # 'git version 2.39.2' 00:01:34.986 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:34.988 [Pipeline] { 00:01:34.997 [Pipeline] retry 00:01:34.998 [Pipeline] { 00:01:35.014 [Pipeline] sh 00:01:35.298 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:35.573 [Pipeline] } 00:01:35.590 [Pipeline] // retry 00:01:35.595 [Pipeline] } 00:01:35.612 [Pipeline] // withCredentials 00:01:35.621 [Pipeline] httpRequest 00:01:36.054 [Pipeline] echo 00:01:36.056 Sorcerer 10.211.164.101 is alive 00:01:36.065 [Pipeline] retry 00:01:36.067 [Pipeline] { 00:01:36.081 [Pipeline] httpRequest 00:01:36.086 HttpMethod: GET 00:01:36.087 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.087 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.093 Response Code: HTTP/1.1 200 OK 00:01:36.094 Success: Status code 200 is in the accepted range: 200,404 00:01:36.094 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:04.044 [Pipeline] } 00:02:04.060 [Pipeline] // retry 00:02:04.067 [Pipeline] sh 00:02:04.354 + tar --no-same-owner -xf 
dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:06.286 [Pipeline] sh 00:02:06.572 + git -C dpdk log --oneline -n5 00:02:06.572 caf0f5d395 version: 22.11.4 00:02:06.572 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:06.572 dc9c799c7d vhost: fix missing spinlock unlock 00:02:06.572 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:06.572 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:06.591 [Pipeline] writeFile 00:02:06.608 [Pipeline] sh 00:02:06.894 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:06.907 [Pipeline] sh 00:02:07.192 + cat autorun-spdk.conf 00:02:07.192 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.192 SPDK_TEST_NVME=1 00:02:07.192 SPDK_TEST_FTL=1 00:02:07.192 SPDK_TEST_ISAL=1 00:02:07.192 SPDK_RUN_ASAN=1 00:02:07.192 SPDK_RUN_UBSAN=1 00:02:07.192 SPDK_TEST_XNVME=1 00:02:07.193 SPDK_TEST_NVME_FDP=1 00:02:07.193 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:07.193 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.193 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.201 RUN_NIGHTLY=1 00:02:07.203 [Pipeline] } 00:02:07.216 [Pipeline] // stage 00:02:07.231 [Pipeline] stage 00:02:07.232 [Pipeline] { (Run VM) 00:02:07.245 [Pipeline] sh 00:02:07.527 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:07.527 + echo 'Start stage prepare_nvme.sh' 00:02:07.527 Start stage prepare_nvme.sh 00:02:07.527 + [[ -n 0 ]] 00:02:07.527 + disk_prefix=ex0 00:02:07.527 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:07.527 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:07.527 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:07.527 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.527 ++ SPDK_TEST_NVME=1 00:02:07.527 ++ SPDK_TEST_FTL=1 00:02:07.527 ++ SPDK_TEST_ISAL=1 00:02:07.527 ++ SPDK_RUN_ASAN=1 00:02:07.527 ++ SPDK_RUN_UBSAN=1 00:02:07.527 ++ SPDK_TEST_XNVME=1 00:02:07.527 ++ SPDK_TEST_NVME_FDP=1 00:02:07.527 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:07.527 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.527 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.527 ++ RUN_NIGHTLY=1 00:02:07.527 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:07.527 + nvme_files=() 00:02:07.527 + declare -A nvme_files 00:02:07.527 + backend_dir=/var/lib/libvirt/images/backends 00:02:07.527 + nvme_files['nvme.img']=5G 00:02:07.527 + nvme_files['nvme-cmb.img']=5G 00:02:07.527 + nvme_files['nvme-multi0.img']=4G 00:02:07.527 + nvme_files['nvme-multi1.img']=4G 00:02:07.527 + nvme_files['nvme-multi2.img']=4G 00:02:07.527 + nvme_files['nvme-openstack.img']=8G 00:02:07.527 + nvme_files['nvme-zns.img']=5G 00:02:07.527 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:07.527 + (( SPDK_TEST_FTL == 1 )) 00:02:07.527 + nvme_files["nvme-ftl.img"]=6G 00:02:07.527 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:07.527 + nvme_files["nvme-fdp.img"]=1G 00:02:07.527 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:07.527 + for nvme in "${!nvme_files[@]}" 00:02:07.527 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:02:07.527 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:07.527 + for nvme in "${!nvme_files[@]}" 00:02:07.527 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:02:07.527 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:07.527 + for nvme in "${!nvme_files[@]}" 00:02:07.527 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:02:07.527 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:07.527 + for nvme in "${!nvme_files[@]}" 00:02:07.527 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:02:07.527 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:07.527 + for nvme in "${!nvme_files[@]}" 00:02:07.527 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:02:08.095 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.095 + for nvme in "${!nvme_files[@]}" 00:02:08.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:02:08.095 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:08.095 + for nvme in "${!nvme_files[@]}" 00:02:08.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:02:08.095 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:08.095 + for nvme in "${!nvme_files[@]}" 00:02:08.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:02:08.355 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:08.355 + for nvme in "${!nvme_files[@]}" 00:02:08.355 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:02:08.931 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.931 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:02:08.931 + echo 'End stage prepare_nvme.sh' 00:02:08.931 End stage prepare_nvme.sh 00:02:08.945 [Pipeline] sh 00:02:09.243 + DISTRO=fedora39 00:02:09.243 + CPUS=10 00:02:09.243 + RAM=12288 00:02:09.243 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:09.244 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:09.244 00:02:09.244 
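(The prepare_nvme.sh loop above allocates one raw backing file per NVMe flavor. A minimal sketch of the equivalent manual allocation; that create_nvme_img.sh wraps qemu-img is an assumption, suggested by the "fmt=raw ... preallocation=falloc" formatting lines:)

    # Sketch only: approximate the backing files the loop above creates.
    # Sizes mirror the nvme_files map; qemu-img as the backend is inferred.
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        [nvme.img]=5G        [nvme-cmb.img]=5G    [nvme-zns.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-ftl.img]=6G [nvme-fdp.img]=1G
    )
    for name in "${!nvme_files[@]}"; do
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex0-$name" "${nvme_files[$name]}"
    done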
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:09.244 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:09.244 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:09.244 HELP=0 00:02:09.244 DRY_RUN=0 00:02:09.244 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:02:09.244 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:09.244 NVME_AUTO_CREATE=0 00:02:09.244 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:02:09.244 NVME_CMB=,,,, 00:02:09.244 NVME_PMR=,,,, 00:02:09.244 NVME_ZNS=,,,, 00:02:09.244 NVME_MS=true,,,, 00:02:09.244 NVME_FDP=,,,on, 00:02:09.244 SPDK_VAGRANT_DISTRO=fedora39 00:02:09.244 SPDK_VAGRANT_VMCPU=10 00:02:09.244 SPDK_VAGRANT_VMRAM=12288 00:02:09.244 SPDK_VAGRANT_PROVIDER=libvirt 00:02:09.244 SPDK_VAGRANT_HTTP_PROXY= 00:02:09.244 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:09.244 SPDK_OPENSTACK_NETWORK=0 00:02:09.244 VAGRANT_PACKAGE_BOX=0 00:02:09.244 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:09.244 FORCE_DISTRO=true 00:02:09.244 VAGRANT_BOX_VERSION= 00:02:09.244 EXTRA_VAGRANTFILES= 00:02:09.244 NIC_MODEL=e1000 00:02:09.244 00:02:09.244 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:09.244 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:11.793 Bringing machine 'default' up with 'libvirt' provider... 00:02:12.055 ==> default: Creating image (snapshot of base box volume). 00:02:12.316 ==> default: Creating domain with the following settings... 
00:02:12.316 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730627800_429e1e4c733724cb9a73 00:02:12.316 ==> default: -- Domain type: kvm 00:02:12.316 ==> default: -- Cpus: 10 00:02:12.316 ==> default: -- Feature: acpi 00:02:12.316 ==> default: -- Feature: apic 00:02:12.316 ==> default: -- Feature: pae 00:02:12.316 ==> default: -- Memory: 12288M 00:02:12.316 ==> default: -- Memory Backing: hugepages: 00:02:12.316 ==> default: -- Management MAC: 00:02:12.316 ==> default: -- Loader: 00:02:12.316 ==> default: -- Nvram: 00:02:12.316 ==> default: -- Base box: spdk/fedora39 00:02:12.316 ==> default: -- Storage pool: default 00:02:12.316 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730627800_429e1e4c733724cb9a73.img (20G) 00:02:12.316 ==> default: -- Volume Cache: default 00:02:12.316 ==> default: -- Kernel: 00:02:12.316 ==> default: -- Initrd: 00:02:12.316 ==> default: -- Graphics Type: vnc 00:02:12.316 ==> default: -- Graphics Port: -1 00:02:12.316 ==> default: -- Graphics IP: 127.0.0.1 00:02:12.316 ==> default: -- Graphics Password: Not defined 00:02:12.316 ==> default: -- Video Type: cirrus 00:02:12.316 ==> default: -- Video VRAM: 9216 00:02:12.316 ==> default: -- Sound Type: 00:02:12.316 ==> default: -- Keymap: en-us 00:02:12.316 ==> default: -- TPM Path: 00:02:12.316 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:12.316 ==> default: -- Command line args: 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:12.316 ==> default: -> value=-drive, 00:02:12.316 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:12.316 ==> default: -> value=-drive, 00:02:12.316 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:12.316 ==> default: -> value=-drive, 00:02:12.316 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.316 ==> default: -> value=-drive, 00:02:12.316 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.316 ==> default: -> value=-drive, 00:02:12.316 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:12.316 ==> default: -> value=-device, 00:02:12.316 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:12.316 ==> default: -> value=-drive, 00:02:12.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:12.317 ==> default: -> value=-device, 00:02:12.317 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.317 ==> default: Creating shared folders metadata... 00:02:12.578 ==> default: Starting domain. 00:02:13.963 ==> default: Waiting for domain to get an IP address... 00:02:35.925 ==> default: Waiting for SSH to become available... 00:02:35.925 ==> default: Configuring and enabling network interfaces... 00:02:37.839 default: SSH address: 192.168.121.145:22 00:02:37.839 default: SSH username: vagrant 00:02:37.839 default: SSH auth method: private key 00:02:39.755 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:47.897 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:53.192 ==> default: Mounting SSHFS shared folder... 00:02:55.109 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:55.109 ==> default: Checking Mount.. 00:02:56.051 ==> default: Folder Successfully Mounted! 00:02:56.051 00:02:56.051 SUCCESS! 00:02:56.051 00:02:56.051 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:56.051 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:56.051 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:56.051 00:02:56.062 [Pipeline] } 00:02:56.076 [Pipeline] // stage 00:02:56.085 [Pipeline] dir 00:02:56.085 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:56.087 [Pipeline] { 00:02:56.099 [Pipeline] catchError 00:02:56.101 [Pipeline] { 00:02:56.113 [Pipeline] sh 00:02:56.397 + vagrant ssh-config --host vagrant 00:02:56.397 + sed -ne '/^Host/,$p' 00:02:56.397 + tee ssh_conf 00:02:58.975 Host vagrant 00:02:58.975 HostName 192.168.121.145 00:02:58.975 User vagrant 00:02:58.975 Port 22 00:02:58.975 UserKnownHostsFile /dev/null 00:02:58.975 StrictHostKeyChecking no 00:02:58.975 PasswordAuthentication no 00:02:58.975 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:58.975 IdentitiesOnly yes 00:02:58.975 LogLevel FATAL 00:02:58.975 ForwardAgent yes 00:02:58.975 ForwardX11 yes 00:02:58.975 00:02:58.983 [Pipeline] withEnv 00:02:58.985 [Pipeline] { 00:02:58.999 [Pipeline] sh 00:02:59.284 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:59.284 source /etc/os-release 00:02:59.284 [[ -e /image.version ]] && img=$(< /image.version) 00:02:59.284 # Minimal, systemd-like check. 
00:02:59.284 if [[ -e /.dockerenv ]]; then 00:02:59.284 # Clear garbage from the node'\''s name: 00:02:59.284 # agt-er_autotest_547-896 -> autotest_547-896 00:02:59.284 # $HOSTNAME is the actual container id 00:02:59.284 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:59.284 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:59.284 # We can assume this is a mount from a host where container is running, 00:02:59.284 # so fetch its hostname to easily identify the target swarm worker. 00:02:59.284 container="$(< /etc/hostname) ($agent)" 00:02:59.284 else 00:02:59.284 # Fallback 00:02:59.284 container=$agent 00:02:59.284 fi 00:02:59.284 fi 00:02:59.284 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:59.284 ' 00:02:59.559 [Pipeline] } 00:02:59.575 [Pipeline] // withEnv 00:02:59.583 [Pipeline] setCustomBuildProperty 00:02:59.597 [Pipeline] stage 00:02:59.599 [Pipeline] { (Tests) 00:02:59.615 [Pipeline] sh 00:02:59.900 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:00.176 [Pipeline] sh 00:03:00.459 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:00.736 [Pipeline] timeout 00:03:00.737 Timeout set to expire in 50 min 00:03:00.739 [Pipeline] { 00:03:00.752 [Pipeline] sh 00:03:01.037 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:03:01.607 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:03:01.619 [Pipeline] sh 00:03:01.934 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:03:01.948 [Pipeline] sh 00:03:02.230 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:02.510 [Pipeline] sh 00:03:02.795 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:03:03.056 ++ readlink -f spdk_repo 00:03:03.056 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:03.056 + [[ -n /home/vagrant/spdk_repo ]] 00:03:03.056 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:03.056 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:03.056 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:03.056 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:03.056 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:03.056 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:03.056 + cd /home/vagrant/spdk_repo 00:03:03.056 + source /etc/os-release 00:03:03.056 ++ NAME='Fedora Linux' 00:03:03.056 ++ VERSION='39 (Cloud Edition)' 00:03:03.056 ++ ID=fedora 00:03:03.056 ++ VERSION_ID=39 00:03:03.056 ++ VERSION_CODENAME= 00:03:03.056 ++ PLATFORM_ID=platform:f39 00:03:03.056 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:03.056 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:03.056 ++ LOGO=fedora-logo-icon 00:03:03.056 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:03.056 ++ HOME_URL=https://fedoraproject.org/ 00:03:03.056 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:03.056 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:03.056 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:03.056 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:03.056 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:03.056 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:03.056 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:03.056 ++ SUPPORT_END=2024-11-12 00:03:03.056 ++ VARIANT='Cloud Edition' 00:03:03.056 ++ VARIANT_ID=cloud 00:03:03.056 + uname -a 00:03:03.056 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:03.056 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:03.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:03.579 Hugepages 00:03:03.579 node hugesize free / total 00:03:03.579 node0 1048576kB 0 / 0 00:03:03.579 node0 2048kB 0 / 0 00:03:03.579 00:03:03.579 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:03.579 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:03.579 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:03.579 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:03.841 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:03:03.841 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:03.841 + rm -f /tmp/spdk-ld-path 00:03:03.841 + source autorun-spdk.conf 00:03:03.841 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:03.841 ++ SPDK_TEST_NVME=1 00:03:03.841 ++ SPDK_TEST_FTL=1 00:03:03.841 ++ SPDK_TEST_ISAL=1 00:03:03.841 ++ SPDK_RUN_ASAN=1 00:03:03.841 ++ SPDK_RUN_UBSAN=1 00:03:03.841 ++ SPDK_TEST_XNVME=1 00:03:03.841 ++ SPDK_TEST_NVME_FDP=1 00:03:03.841 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:03.841 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:03.841 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:03.841 ++ RUN_NIGHTLY=1 00:03:03.841 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:03.841 + [[ -n '' ]] 00:03:03.841 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:03.841 + for M in /var/spdk/build-*-manifest.txt 00:03:03.841 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:03.841 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.841 + for M in /var/spdk/build-*-manifest.txt 00:03:03.841 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:03.841 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.841 + for M in /var/spdk/build-*-manifest.txt 00:03:03.841 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:03.841 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:03.841 ++ uname 00:03:03.841 + [[ Linux == 
\L\i\n\u\x ]] 00:03:03.841 + sudo dmesg -T 00:03:03.841 + sudo dmesg --clear 00:03:03.841 + dmesg_pid=5765 00:03:03.841 + [[ Fedora Linux == FreeBSD ]] 00:03:03.841 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:03.841 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:03.841 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:03.841 + [[ -x /usr/src/fio-static/fio ]] 00:03:03.841 + sudo dmesg -Tw 00:03:03.841 + export FIO_BIN=/usr/src/fio-static/fio 00:03:03.841 + FIO_BIN=/usr/src/fio-static/fio 00:03:03.841 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:03.841 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:03.841 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:03.841 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:03.841 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:03.841 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:03.841 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:03.841 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:03.841 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:03.841 Test configuration: 00:03:03.841 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:03.841 SPDK_TEST_NVME=1 00:03:03.841 SPDK_TEST_FTL=1 00:03:03.841 SPDK_TEST_ISAL=1 00:03:03.841 SPDK_RUN_ASAN=1 00:03:03.841 SPDK_RUN_UBSAN=1 00:03:03.841 SPDK_TEST_XNVME=1 00:03:03.841 SPDK_TEST_NVME_FDP=1 00:03:03.841 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:03.841 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:03.841 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:03.841 RUN_NIGHTLY=1 09:57:32 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:03.841 09:57:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:03.841 09:57:32 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:03.841 09:57:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:03.841 09:57:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:03.841 09:57:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:03.841 09:57:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.841 09:57:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.841 09:57:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.841 09:57:32 -- paths/export.sh@5 -- $ export PATH 00:03:03.841 09:57:32 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.841 09:57:32 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:03.841 09:57:32 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:04.102 09:57:32 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1730627852.XXXXXX 00:03:04.102 09:57:32 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1730627852.VuEupA 00:03:04.102 09:57:32 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:04.102 09:57:32 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:03:04.102 09:57:32 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:04.102 09:57:32 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:04.102 09:57:32 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:04.102 09:57:32 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:04.102 09:57:32 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:04.102 09:57:32 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:04.102 09:57:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.102 09:57:32 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:03:04.102 09:57:32 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:04.102 09:57:32 -- pm/common@17 -- $ local monitor 00:03:04.102 09:57:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.102 09:57:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.102 09:57:32 -- pm/common@25 -- $ sleep 1 00:03:04.102 09:57:32 -- pm/common@21 -- $ date +%s 00:03:04.102 09:57:32 -- pm/common@21 -- $ date +%s 00:03:04.102 09:57:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730627852 00:03:04.102 09:57:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730627852 00:03:04.102 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730627852_collect-cpu-load.pm.log 00:03:04.102 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730627852_collect-vmstat.pm.log 00:03:05.046 09:57:33 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:05.046 09:57:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:05.046 09:57:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:05.046 09:57:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:05.046 09:57:33 -- spdk/autobuild.sh@16 -- $ date -u 00:03:05.046 
Sun Nov 3 09:57:33 AM UTC 2024 00:03:05.046 09:57:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:05.046 v24.09-rc1-9-gb18e1bd62 00:03:05.046 09:57:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:05.046 09:57:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:05.046 09:57:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:05.046 09:57:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:05.046 09:57:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.046 ************************************ 00:03:05.046 START TEST asan 00:03:05.046 ************************************ 00:03:05.046 using asan 00:03:05.046 09:57:33 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:03:05.046 00:03:05.046 real 0m0.000s 00:03:05.046 user 0m0.000s 00:03:05.046 sys 0m0.000s 00:03:05.046 09:57:33 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:05.046 ************************************ 00:03:05.046 END TEST asan 00:03:05.046 ************************************ 00:03:05.046 09:57:33 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:05.046 09:57:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:05.046 09:57:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:05.046 09:57:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:05.046 09:57:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:05.046 09:57:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.046 ************************************ 00:03:05.046 START TEST ubsan 00:03:05.046 ************************************ 00:03:05.046 using ubsan 00:03:05.046 ************************************ 00:03:05.046 END TEST ubsan 00:03:05.046 ************************************ 00:03:05.046 09:57:33 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:05.046 00:03:05.046 real 0m0.000s 00:03:05.046 user 0m0.000s 00:03:05.046 sys 0m0.000s 00:03:05.046 09:57:33 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:05.046 09:57:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:05.046 09:57:33 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:03:05.046 09:57:33 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:05.046 09:57:33 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:05.046 09:57:33 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:05.046 09:57:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:05.046 09:57:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.046 ************************************ 00:03:05.046 START TEST build_native_dpdk 00:03:05.046 ************************************ 00:03:05.046 09:57:33 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:05.046 09:57:33 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:05.046 09:57:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:05.308 caf0f5d395 version: 22.11.4 00:03:05.308 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:03:05.308 dc9c799c7d vhost: fix missing spinlock unlock 00:03:05.308 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:03:05.308 6ef77f2a5e net/gve: fix RX buffer size alignment 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:03:05.308 09:57:33 build_native_dpdk 
-- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:05.308 patching file config/rte_config.h 00:03:05.308 Hunk #1 succeeded at 60 (offset 1 line). 
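(The xtrace above steps through cmp_versions from scripts/common.sh field by field. A condensed, standalone sketch of the traced logic; the real helper also validates each field through decimal():)

    # Condensed sketch of the comparison walked through above: split both
    # versions on ".-:" and compare numerically, one component at a time.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *=* ]]    # all fields equal: true only for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }     # lt 22.11.4 21.11.0 -> false, as traced above
    ge() { cmp_versions "$1" '>=' "$2"; }    # ge 22.11.4 24.07.0 -> false, as traced below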
00:03:05.308 09:57:33 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:05.308 09:57:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:05.309 09:57:33 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:05.309 patching file lib/pcapng/rte_pcapng.c 00:03:05.309 Hunk #1 succeeded at 110 (offset -18 lines). 
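(Taken together, the two hunks above illustrate the version-gated patching pattern: each patch is applied only when the native DPDK version falls inside the range it targets. Schematically, with a hypothetical patch filename standing in for the repo's actual patch path:)

    # Schematic of the gating seen above; the patch path is a placeholder.
    dpdk_ver=22.11.4
    if lt "$dpdk_ver" 24.07.0; then
        # pre-24.07 trees get the rte_pcapng.c adjustment applied above
        patch -p1 < "$patch_dir/pcapng-pre-24.07.patch"
    fi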
00:03:05.309 09:57:33 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:05.309 09:57:33 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:05.309 09:57:33 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:05.309 09:57:33 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:05.309 09:57:33 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:05.309 09:57:33 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:05.309 09:57:33 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:09.517 The Meson build system 00:03:09.517 Version: 1.5.0 00:03:09.517 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:09.517 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:09.517 Build type: native build 00:03:09.517 Program cat found: YES 
(/usr/bin/cat) 00:03:09.517 Project name: DPDK 00:03:09.517 Project version: 22.11.4 00:03:09.517 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:09.517 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:09.517 Host machine cpu family: x86_64 00:03:09.517 Host machine cpu: x86_64 00:03:09.517 Message: ## Building in Developer Mode ## 00:03:09.517 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:09.517 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:09.517 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:09.517 Program objdump found: YES (/usr/bin/objdump) 00:03:09.517 Program python3 found: YES (/usr/bin/python3) 00:03:09.517 Program cat found: YES (/usr/bin/cat) 00:03:09.517 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:03:09.517 Checking for size of "void *" : 8 00:03:09.517 Checking for size of "void *" : 8 (cached) 00:03:09.517 Library m found: YES 00:03:09.517 Library numa found: YES 00:03:09.517 Has header "numaif.h" : YES 00:03:09.517 Library fdt found: NO 00:03:09.517 Library execinfo found: NO 00:03:09.517 Has header "execinfo.h" : YES 00:03:09.517 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:09.517 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:09.517 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:09.517 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:09.517 Run-time dependency openssl found: YES 3.1.1 00:03:09.517 Run-time dependency libpcap found: YES 1.10.4 00:03:09.517 Has header "pcap.h" with dependency libpcap: YES 00:03:09.517 Compiler for C supports arguments -Wcast-qual: YES 00:03:09.517 Compiler for C supports arguments -Wdeprecated: YES 00:03:09.517 Compiler for C supports arguments -Wformat: YES 00:03:09.517 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:09.517 Compiler for C supports arguments -Wformat-security: NO 00:03:09.517 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:09.517 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:09.517 Compiler for C supports arguments -Wnested-externs: YES 00:03:09.517 Compiler for C supports arguments -Wold-style-definition: YES 00:03:09.517 Compiler for C supports arguments -Wpointer-arith: YES 00:03:09.517 Compiler for C supports arguments -Wsign-compare: YES 00:03:09.517 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:09.517 Compiler for C supports arguments -Wundef: YES 00:03:09.517 Compiler for C supports arguments -Wwrite-strings: YES 00:03:09.517 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:09.517 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:09.517 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:09.517 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:09.517 Compiler for C supports arguments -mavx512f: YES 00:03:09.517 Checking if "AVX512 checking" compiles: YES 00:03:09.517 Fetching value of define "__SSE4_2__" : 1 00:03:09.517 Fetching value of define "__AES__" : 1 00:03:09.517 Fetching value of define "__AVX__" : 1 00:03:09.517 Fetching value of define "__AVX2__" : 1 00:03:09.517 Fetching value of define "__AVX512BW__" : 1 00:03:09.517 Fetching value of define "__AVX512CD__" : 1 00:03:09.517 Fetching value of define "__AVX512DQ__" : 1 
00:03:09.517 Fetching value of define "__AVX512F__" : 1 00:03:09.517 Fetching value of define "__AVX512VL__" : 1 00:03:09.517 Fetching value of define "__PCLMUL__" : 1 00:03:09.517 Fetching value of define "__RDRND__" : 1 00:03:09.517 Fetching value of define "__RDSEED__" : 1 00:03:09.518 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:09.518 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:09.518 Message: lib/kvargs: Defining dependency "kvargs" 00:03:09.518 Message: lib/telemetry: Defining dependency "telemetry" 00:03:09.518 Checking for function "getentropy" : YES 00:03:09.518 Message: lib/eal: Defining dependency "eal" 00:03:09.518 Message: lib/ring: Defining dependency "ring" 00:03:09.518 Message: lib/rcu: Defining dependency "rcu" 00:03:09.518 Message: lib/mempool: Defining dependency "mempool" 00:03:09.518 Message: lib/mbuf: Defining dependency "mbuf" 00:03:09.518 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:09.518 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:09.518 Compiler for C supports arguments -mpclmul: YES 00:03:09.518 Compiler for C supports arguments -maes: YES 00:03:09.518 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:09.518 Compiler for C supports arguments -mavx512bw: YES 00:03:09.518 Compiler for C supports arguments -mavx512dq: YES 00:03:09.518 Compiler for C supports arguments -mavx512vl: YES 00:03:09.518 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:09.518 Compiler for C supports arguments -mavx2: YES 00:03:09.518 Compiler for C supports arguments -mavx: YES 00:03:09.518 Message: lib/net: Defining dependency "net" 00:03:09.518 Message: lib/meter: Defining dependency "meter" 00:03:09.518 Message: lib/ethdev: Defining dependency "ethdev" 00:03:09.518 Message: lib/pci: Defining dependency "pci" 00:03:09.518 Message: lib/cmdline: Defining dependency "cmdline" 00:03:09.518 Message: lib/metrics: Defining dependency "metrics" 00:03:09.518 Message: lib/hash: Defining dependency "hash" 00:03:09.518 Message: lib/timer: Defining dependency "timer" 00:03:09.518 Fetching value of define "__AVX2__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:09.518 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:09.518 Message: lib/acl: Defining dependency "acl" 00:03:09.518 Message: lib/bbdev: Defining dependency "bbdev" 00:03:09.518 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:09.518 Run-time dependency libelf found: YES 0.191 00:03:09.518 Message: lib/bpf: Defining dependency "bpf" 00:03:09.518 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:09.518 Message: lib/compressdev: Defining dependency "compressdev" 00:03:09.518 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:09.518 Message: lib/distributor: Defining dependency "distributor" 00:03:09.518 Message: lib/efd: Defining dependency "efd" 00:03:09.518 Message: lib/eventdev: Defining dependency "eventdev" 00:03:09.518 Message: lib/gpudev: Defining dependency "gpudev" 00:03:09.518 Message: lib/gro: Defining dependency "gro" 00:03:09.518 Message: lib/gso: Defining dependency "gso" 
00:03:09.518 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:09.518 Message: lib/jobstats: Defining dependency "jobstats"
00:03:09.518 Message: lib/latencystats: Defining dependency "latencystats"
00:03:09.518 Message: lib/lpm: Defining dependency "lpm"
00:03:09.518 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.518 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:09.518 Fetching value of define "__AVX512IFMA__" : 1
00:03:09.518 Message: lib/member: Defining dependency "member"
00:03:09.518 Message: lib/pcapng: Defining dependency "pcapng"
00:03:09.518 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:09.518 Message: lib/power: Defining dependency "power"
00:03:09.518 Message: lib/rawdev: Defining dependency "rawdev"
00:03:09.518 Message: lib/regexdev: Defining dependency "regexdev"
00:03:09.518 Message: lib/dmadev: Defining dependency "dmadev"
00:03:09.518 Message: lib/rib: Defining dependency "rib"
00:03:09.518 Message: lib/reorder: Defining dependency "reorder"
00:03:09.518 Message: lib/sched: Defining dependency "sched"
00:03:09.518 Message: lib/security: Defining dependency "security"
00:03:09.518 Message: lib/stack: Defining dependency "stack"
00:03:09.518 Has header "linux/userfaultfd.h" : YES
00:03:09.518 Message: lib/vhost: Defining dependency "vhost"
00:03:09.518 Message: lib/ipsec: Defining dependency "ipsec"
00:03:09.518 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.518 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:09.518 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:09.518 Message: lib/fib: Defining dependency "fib"
00:03:09.518 Message: lib/port: Defining dependency "port"
00:03:09.518 Message: lib/pdump: Defining dependency "pdump"
00:03:09.518 Message: lib/table: Defining dependency "table"
00:03:09.518 Message: lib/pipeline: Defining dependency "pipeline"
00:03:09.518 Message: lib/graph: Defining dependency "graph"
00:03:09.518 Message: lib/node: Defining dependency "node"
00:03:09.518 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:09.518 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:09.518 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:09.518 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:09.518 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:09.518 Compiler for C supports arguments -Wno-unused-value: YES
00:03:09.518 Compiler for C supports arguments -Wno-format: YES
00:03:09.518 Compiler for C supports arguments -Wno-format-security: YES
00:03:09.518 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:09.518 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:09.518 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:09.518 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:10.907 Fetching value of define "__AVX2__" : 1 (cached)
00:03:10.907 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:10.907 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:10.907 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:10.907 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:10.907 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:10.907 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:10.907 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:10.907 Configuring doxy-api.conf using configuration
00:03:10.907 Program sphinx-build found: NO
00:03:10.907 Configuring rte_build_config.h using configuration
00:03:10.907 Message:
00:03:10.907 =================
00:03:10.907 Applications Enabled
00:03:10.907 =================
00:03:10.907
00:03:10.907 apps:
00:03:10.907 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:03:10.907 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:03:10.907 test-security-perf,
00:03:10.907
00:03:10.907 Message:
00:03:10.907 =================
00:03:10.907 Libraries Enabled
00:03:10.907 =================
00:03:10.907
00:03:10.907 libs:
00:03:10.907 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:03:10.907 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:03:10.907 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:03:10.907 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:03:10.907 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:03:10.907 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:03:10.907 table, pipeline, graph, node,
00:03:10.907
00:03:10.907 Message:
00:03:10.907 ===============
00:03:10.907 Drivers Enabled
00:03:10.907 ===============
00:03:10.907
00:03:10.907 common:
00:03:10.907
00:03:10.907 bus:
00:03:10.907 pci, vdev,
00:03:10.907 mempool:
00:03:10.907 ring,
00:03:10.907 dma:
00:03:10.907
00:03:10.907 net:
00:03:10.907 i40e,
00:03:10.907 raw:
00:03:10.907
00:03:10.907 crypto:
00:03:10.907
00:03:10.907 compress:
00:03:10.907
00:03:10.907 regex:
00:03:10.907
00:03:10.907 vdpa:
00:03:10.907
00:03:10.907 event:
00:03:10.907
00:03:10.907 baseband:
00:03:10.907
00:03:10.907 gpu:
00:03:10.907
00:03:10.907
00:03:10.907 Message:
00:03:10.907 =================
00:03:10.907 Content Skipped
00:03:10.907 =================
00:03:10.907
00:03:10.907 apps:
00:03:10.907
00:03:10.907 libs:
00:03:10.907 kni: explicitly disabled via build config (deprecated lib)
00:03:10.907 flow_classify: explicitly disabled via build config (deprecated lib)
00:03:10.907
00:03:10.907 drivers:
00:03:10.907 common/cpt: not in enabled drivers build config
00:03:10.907 common/dpaax: not in enabled drivers build config
00:03:10.907 common/iavf: not in enabled drivers build config
00:03:10.907 common/idpf: not in enabled drivers build config
00:03:10.907 common/mvep: not in enabled drivers build config
00:03:10.907 common/octeontx: not in enabled drivers build config
00:03:10.907 bus/auxiliary: not in enabled drivers build config
00:03:10.907 bus/dpaa: not in enabled drivers build config
00:03:10.907 bus/fslmc: not in enabled drivers build config
00:03:10.907 bus/ifpga: not in enabled drivers build config
00:03:10.907 bus/vmbus: not in enabled drivers build config
00:03:10.907 common/cnxk: not in enabled drivers build config
00:03:10.907 common/mlx5: not in enabled drivers build config
00:03:10.907 common/qat: not in enabled drivers build config
00:03:10.907 common/sfc_efx: not in enabled drivers build config
00:03:10.907 mempool/bucket: not in enabled drivers build config
00:03:10.907 mempool/cnxk: not in enabled drivers build config
00:03:10.907 mempool/dpaa: not in enabled drivers build config
00:03:10.907 mempool/dpaa2: not in enabled drivers build config
00:03:10.907 mempool/octeontx: not in enabled drivers build config
00:03:10.907 mempool/stack: not in enabled drivers build config
00:03:10.907 dma/cnxk: not in enabled drivers build config
00:03:10.907 dma/dpaa: not in enabled drivers build config
00:03:10.907 dma/dpaa2: not in enabled drivers build config
00:03:10.907 dma/hisilicon: not in enabled drivers build config
00:03:10.907 dma/idxd: not in enabled drivers build config
00:03:10.907 dma/ioat: not in enabled drivers build config
00:03:10.907 dma/skeleton: not in enabled drivers build config
00:03:10.907 net/af_packet: not in enabled drivers build config
00:03:10.907 net/af_xdp: not in enabled drivers build config
00:03:10.907 net/ark: not in enabled drivers build config
00:03:10.907 net/atlantic: not in enabled drivers build config
00:03:10.907 net/avp: not in enabled drivers build config
00:03:10.907 net/axgbe: not in enabled drivers build config
00:03:10.907 net/bnx2x: not in enabled drivers build config
00:03:10.907 net/bnxt: not in enabled drivers build config
00:03:10.907 net/bonding: not in enabled drivers build config
00:03:10.907 net/cnxk: not in enabled drivers build config
00:03:10.907 net/cxgbe: not in enabled drivers build config
00:03:10.907 net/dpaa: not in enabled drivers build config
00:03:10.907 net/dpaa2: not in enabled drivers build config
00:03:10.907 net/e1000: not in enabled drivers build config
00:03:10.907 net/ena: not in enabled drivers build config
00:03:10.907 net/enetc: not in enabled drivers build config
00:03:10.907 net/enetfec: not in enabled drivers build config
00:03:10.907 net/enic: not in enabled drivers build config
00:03:10.907 net/failsafe: not in enabled drivers build config
00:03:10.907 net/fm10k: not in enabled drivers build config
00:03:10.907 net/gve: not in enabled drivers build config
00:03:10.907 net/hinic: not in enabled drivers build config
00:03:10.907 net/hns3: not in enabled drivers build config
00:03:10.907 net/iavf: not in enabled drivers build config
00:03:10.907 net/ice: not in enabled drivers build config
00:03:10.907 net/idpf: not in enabled drivers build config
00:03:10.907 net/igc: not in enabled drivers build config
00:03:10.907 net/ionic: not in enabled drivers build config
00:03:10.907 net/ipn3ke: not in enabled drivers build config
00:03:10.907 net/ixgbe: not in enabled drivers build config
00:03:10.907 net/kni: not in enabled drivers build config
00:03:10.907 net/liquidio: not in enabled drivers build config
00:03:10.907 net/mana: not in enabled drivers build config
00:03:10.907 net/memif: not in enabled drivers build config
00:03:10.907 net/mlx4: not in enabled drivers build config
00:03:10.907 net/mlx5: not in enabled drivers build config
00:03:10.907 net/mvneta: not in enabled drivers build config
00:03:10.907 net/mvpp2: not in enabled drivers build config
00:03:10.907 net/netvsc: not in enabled drivers build config
00:03:10.907 net/nfb: not in enabled drivers build config
00:03:10.907 net/nfp: not in enabled drivers build config
00:03:10.907 net/ngbe: not in enabled drivers build config
00:03:10.907 net/null: not in enabled drivers build config
00:03:10.907 net/octeontx: not in enabled drivers build config
00:03:10.907 net/octeon_ep: not in enabled drivers build config
00:03:10.907 net/pcap: not in enabled drivers build config
00:03:10.907 net/pfe: not in enabled drivers build config
00:03:10.907 net/qede: not in enabled drivers build config
00:03:10.907 net/ring: not in enabled drivers build config
00:03:10.907 net/sfc: not in enabled drivers build config
00:03:10.907 net/softnic: not in enabled drivers build config
00:03:10.907 net/tap: not in enabled drivers build config
00:03:10.907 net/thunderx: not in enabled drivers build config
00:03:10.907 net/txgbe: not in enabled drivers build config
00:03:10.907 net/vdev_netvsc: not in enabled drivers build config
00:03:10.907 net/vhost: not in enabled drivers build config
00:03:10.907 net/virtio: not in enabled drivers build config
00:03:10.907 net/vmxnet3: not in enabled drivers build config
00:03:10.907 raw/cnxk_bphy: not in enabled drivers build config
00:03:10.907 raw/cnxk_gpio: not in enabled drivers build config
00:03:10.907 raw/dpaa2_cmdif: not in enabled drivers build config
00:03:10.907 raw/ifpga: not in enabled drivers build config
00:03:10.907 raw/ntb: not in enabled drivers build config
00:03:10.907 raw/skeleton: not in enabled drivers build config
00:03:10.907 crypto/armv8: not in enabled drivers build config
00:03:10.907 crypto/bcmfs: not in enabled drivers build config
00:03:10.907 crypto/caam_jr: not in enabled drivers build config
00:03:10.907 crypto/ccp: not in enabled drivers build config
00:03:10.907 crypto/cnxk: not in enabled drivers build config
00:03:10.907 crypto/dpaa_sec: not in enabled drivers build config
00:03:10.907 crypto/dpaa2_sec: not in enabled drivers build config
00:03:10.907 crypto/ipsec_mb: not in enabled drivers build config
00:03:10.907 crypto/mlx5: not in enabled drivers build config
00:03:10.907 crypto/mvsam: not in enabled drivers build config
00:03:10.907 crypto/nitrox: not in enabled drivers build config
00:03:10.907 crypto/null: not in enabled drivers build config
00:03:10.907 crypto/octeontx: not in enabled drivers build config
00:03:10.907 crypto/openssl: not in enabled drivers build config
00:03:10.907 crypto/scheduler: not in enabled drivers build config
00:03:10.907 crypto/uadk: not in enabled drivers build config
00:03:10.907 crypto/virtio: not in enabled drivers build config
00:03:10.907 compress/isal: not in enabled drivers build config
00:03:10.907 compress/mlx5: not in enabled drivers build config
00:03:10.907 compress/octeontx: not in enabled drivers build config
00:03:10.907 compress/zlib: not in enabled drivers build config
00:03:10.907 regex/mlx5: not in enabled drivers build config
00:03:10.908 regex/cn9k: not in enabled drivers build config
00:03:10.908 vdpa/ifc: not in enabled drivers build config
00:03:10.908 vdpa/mlx5: not in enabled drivers build config
00:03:10.908 vdpa/sfc: not in enabled drivers build config
00:03:10.908 event/cnxk: not in enabled drivers build config
00:03:10.908 event/dlb2: not in enabled drivers build config
00:03:10.908 event/dpaa: not in enabled drivers build config
00:03:10.908 event/dpaa2: not in enabled drivers build config
00:03:10.908 event/dsw: not in enabled drivers build config
00:03:10.908 event/opdl: not in enabled drivers build config
00:03:10.908 event/skeleton: not in enabled drivers build config
00:03:10.908 event/sw: not in enabled drivers build config
00:03:10.908 event/octeontx: not in enabled drivers build config
00:03:10.908 baseband/acc: not in enabled drivers build config
00:03:10.908 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:10.908 baseband/fpga_lte_fec: not in enabled drivers build config
00:03:10.908 baseband/la12xx: not in enabled drivers build config
00:03:10.908 baseband/null: not in enabled drivers build config
00:03:10.908 baseband/turbo_sw: not in enabled drivers build config
00:03:10.908 gpu/cuda: not in enabled drivers build config
00:03:10.908
00:03:10.908
00:03:10.908 Build targets in project: 309
00:03:10.908
00:03:10.908 DPDK 22.11.4
00:03:10.908
00:03:10.908 User defined options
00:03:10.908 libdir : lib
00:03:10.908 prefix : /home/vagrant/spdk_repo/dpdk/build
00:03:10.908 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:10.908 c_link_args :
00:03:10.908 enable_docs : false
00:03:10.908 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:10.908 enable_kmods : false
00:03:10.908 machine : native
00:03:10.908 tests : false
00:03:10.908
00:03:10.908 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:10.908 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:10.908 09:57:39 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:10.908 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:11.167 [1/738] Generating lib/rte_kvargs_def with a custom command
00:03:11.167 [2/738] Generating lib/rte_kvargs_mingw with a custom command
00:03:11.167 [3/738] Generating lib/rte_telemetry_mingw with a custom command
00:03:11.167 [4/738] Generating lib/rte_telemetry_def with a custom command
00:03:11.167 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:11.167 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:11.167 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:11.167 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:11.167 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:11.167 [10/738] Linking static target lib/librte_kvargs.a
00:03:11.167 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:11.167 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:11.167 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:11.167 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:11.167 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:11.167 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:11.425 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:11.425 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:11.425 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:11.425 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.425 [21/738] Linking target lib/librte_kvargs.so.23.0
00:03:11.425 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:11.425 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:11.425 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:03:11.425 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:11.425 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:11.425 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:11.699 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:11.699 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:11.699 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:11.699 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:11.699 [32/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:11.699 [33/738] Linking static target lib/librte_telemetry.a
00:03:11.699 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:11.699 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:11.699 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:11.699 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:11.699 [38/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:03:11.699 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:11.699 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:11.699 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:11.965 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.965 [43/738] Linking target lib/librte_telemetry.so.23.0
00:03:11.965 [44/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:11.965 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:11.965 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:11.965 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:11.965 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:11.965 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:11.965 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:03:11.965 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:11.965 [52/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:11.965 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:11.965 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:11.965 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:11.965 [56/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:11.965 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:11.965 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:12.224 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:12.224 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:12.224 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:12.224 [62/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:12.224 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:12.224 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:12.224 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:12.224 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:03:12.224 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:12.224 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:12.224 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:12.224 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:12.224 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:12.224 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:12.224 [73/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:12.224 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:12.224 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:12.224 [76/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:12.224 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:12.224 [78/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:12.224 [79/738] Generating lib/rte_eal_def with a custom command
00:03:12.224 [80/738] Generating lib/rte_eal_mingw with a custom command
00:03:12.482 [81/738] Generating lib/rte_ring_mingw with a custom command
00:03:12.482 [82/738] Generating lib/rte_ring_def with a custom command
00:03:12.482 [83/738] Generating lib/rte_rcu_def with a custom command
00:03:12.482 [84/738] Generating lib/rte_rcu_mingw with a custom command
00:03:12.483 [85/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:12.483 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:12.483 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:12.483 [88/738] Linking static target lib/librte_ring.a
00:03:12.483 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:12.483 [90/738] Generating lib/rte_mempool_def with a custom command
00:03:12.483 [91/738] Generating lib/rte_mempool_mingw with a custom command
00:03:12.483 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:12.483 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:12.741 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.741 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:12.741 [96/738] Generating lib/rte_mbuf_def with a custom command
00:03:12.741 [97/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:12.741 [98/738] Generating lib/rte_mbuf_mingw with a custom command
00:03:12.741 [99/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:12.741 [100/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:12.741 [101/738] Linking static target lib/librte_eal.a
00:03:12.999 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:12.999 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:12.999 [104/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:12.999 [105/738] Linking static target lib/librte_mempool.a
00:03:12.999 [106/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:12.999 [107/738] Linking static target lib/librte_rcu.a
00:03:12.999 [108/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:12.999 [109/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:13.257 [110/738] Generating lib/rte_net_def with a custom command
00:03:13.257 [111/738] Generating lib/rte_net_mingw with a custom command
00:03:13.257 [112/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:13.257 [113/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:13.257 [114/738] Generating lib/rte_meter_def with a custom command
00:03:13.257 [115/738] Generating lib/rte_meter_mingw with a custom command
00:03:13.257 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:13.257 [117/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:13.257 [118/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:13.257 [119/738] Linking static target lib/librte_meter.a
00:03:13.514 [120/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.514 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.514 [122/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:13.514 [123/738] Linking static target lib/librte_mbuf.a
00:03:13.514 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:13.514 [125/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:03:13.514 [126/738] Linking static target lib/librte_net.a
00:03:13.514 [127/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.514 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:13.776 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:13.776 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:13.776 [131/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.776 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:13.776 [133/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.776 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:14.035 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:14.035 [136/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:14.035 [137/738] Generating lib/rte_ethdev_def with a custom command
00:03:14.035 [138/738] Generating lib/rte_ethdev_mingw with a custom command
00:03:14.035 [139/738] Generating lib/rte_pci_def with a custom command
00:03:14.035 [140/738] Generating lib/rte_pci_mingw with a custom command
00:03:14.035 [141/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:14.035 [142/738] Linking static target lib/librte_pci.a
00:03:14.035 [143/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:14.293 [144/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.293 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:14.293 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:14.293 [147/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:14.294 [148/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:14.294 [149/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:14.294 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:14.294 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:14.294 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:14.294 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:14.294 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:14.294 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:14.294 [156/738] Generating lib/rte_cmdline_def with a custom command
00:03:14.294 [157/738] Generating lib/rte_cmdline_mingw with a custom command
00:03:14.294 [158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:14.294 [159/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:14.551 [160/738] Generating lib/rte_metrics_def with a custom command
00:03:14.551 [161/738] Generating lib/rte_metrics_mingw with a custom command
00:03:14.551 [162/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:14.551 [163/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:14.551 [164/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:03:14.551 [165/738] Generating lib/rte_hash_def with a custom command
00:03:14.551 [166/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:14.551 [167/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:14.551 [168/738] Generating lib/rte_hash_mingw with a custom command
00:03:14.551 [169/738] Generating lib/rte_timer_def with a custom command
00:03:14.551 [170/738] Generating lib/rte_timer_mingw with a custom command
00:03:14.552 [171/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:14.552 [172/738] Linking static target lib/librte_cmdline.a
00:03:14.552 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:03:14.552 [174/738] Linking static target lib/librte_metrics.a
00:03:14.809 [175/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.809 [176/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:14.809 [177/738] Linking static target lib/librte_timer.a
00:03:15.067 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:03:15.067 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:03:15.067 [180/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:15.067 [181/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.067 [182/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.067 [183/738] Generating lib/rte_acl_def with a custom command
00:03:15.067 [184/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:03:15.067 [185/738] Generating lib/rte_acl_mingw with a custom command
00:03:15.067 [186/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:15.325 [187/738] Generating lib/rte_bbdev_def with a custom command
00:03:15.325 [188/738] Linking static target lib/librte_ethdev.a
00:03:15.325 [189/738] Generating lib/rte_bbdev_mingw with a custom command
00:03:15.325 [190/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:03:15.325 [191/738] Generating lib/rte_bitratestats_def with a custom command
00:03:15.325 [192/738] Generating lib/rte_bitratestats_mingw with a custom command
00:03:15.582 [193/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:03:15.582 [194/738] Linking static target lib/librte_bitratestats.a
00:03:15.582 [195/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:03:15.582 [196/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.582 [197/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:03:15.582 [198/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:03:15.582 [199/738] Linking static target lib/librte_bbdev.a
00:03:15.839 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:03:15.839 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:03:16.097 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:03:16.097 [203/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.097 [204/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:16.097 [205/738] Linking static target lib/librte_hash.a
00:03:16.097 [206/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:03:16.355 [207/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:03:16.355 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:03:16.355 [209/738] Generating lib/rte_bpf_def with a custom command
00:03:16.613 [210/738] Generating lib/rte_bpf_mingw with a custom command
00:03:16.613 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:03:16.613 [212/738] Generating lib/rte_cfgfile_def with a custom command
00:03:16.613 [213/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.613 [214/738] Generating lib/rte_cfgfile_mingw with a custom command
00:03:16.613 [215/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:03:16.613 [216/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:03:16.613 [217/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:03:16.613 [218/738] Linking static target lib/librte_cfgfile.a
00:03:16.613 [219/738] Generating lib/rte_compressdev_def with a custom command
00:03:16.613 [220/738] Generating lib/rte_compressdev_mingw with a custom command
00:03:16.871 [221/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:03:16.871 [222/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:03:16.871 [223/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:16.871 [224/738] Linking static target lib/librte_bpf.a
00:03:16.871 [225/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.871 [226/738] Generating lib/rte_cryptodev_def with a custom command
00:03:16.871 [227/738] Generating lib/rte_cryptodev_mingw with a custom command
00:03:16.871 [228/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:16.871 [229/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:03:16.871 [230/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:16.871 [231/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:16.871 [232/738] Linking static target lib/librte_acl.a
00:03:16.871 [233/738] Linking static target lib/librte_compressdev.a
00:03:17.130 [234/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.130 [235/738] Generating lib/rte_distributor_def with a custom command
00:03:17.130 [236/738] Generating lib/rte_distributor_mingw with a custom command
00:03:17.130 [237/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.130 [238/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:17.130 [239/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:03:17.130 [240/738] Generating lib/rte_efd_mingw with a custom command
00:03:17.130 [241/738] Generating lib/rte_efd_def with a custom command
00:03:17.395 [242/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.395 [243/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:03:17.395 [244/738] Linking target lib/librte_eal.so.23.0
00:03:17.395 [245/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:03:17.395 [246/738] Linking target lib/librte_ring.so.23.0
00:03:17.395 [247/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:03:17.395 [248/738] Linking target lib/librte_meter.so.23.0
00:03:17.395 [249/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:03:17.395 [250/738] Linking target lib/librte_rcu.so.23.0
00:03:17.653 [251/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:03:17.653 [252/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:03:17.653 [253/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:03:17.653 [254/738] Linking target lib/librte_mempool.so.23.0
00:03:17.653 [255/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.653 [256/738] Linking target lib/librte_pci.so.23.0
00:03:17.653 [257/738] Linking target lib/librte_timer.so.23.0
00:03:17.653 [258/738] Linking target lib/librte_acl.so.23.0
00:03:17.653 [259/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:03:17.653 [260/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:03:17.653 [261/738] Linking target lib/librte_cfgfile.so.23.0
00:03:17.653 [262/738] Linking static target lib/librte_distributor.a
00:03:17.653 [263/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:03:17.653 [264/738] Linking target lib/librte_mbuf.so.23.0
00:03:17.653 [265/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:03:17.653 [266/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:03:17.653 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:03:17.911 [268/738] Linking target lib/librte_net.so.23.0
00:03:17.911 [269/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.911 [270/738] Linking target lib/librte_bbdev.so.23.0
00:03:17.911 [271/738] Linking target lib/librte_compressdev.so.23.0
00:03:17.911 [272/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:03:17.911 [273/738] Linking target lib/librte_distributor.so.23.0
00:03:17.911 [274/738] Linking target lib/librte_cmdline.so.23.0
00:03:17.911 [275/738] Linking target lib/librte_hash.so.23.0
00:03:17.911 [276/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:03:17.911 [277/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:03:17.911 [278/738] Generating lib/rte_eventdev_def with a custom command
00:03:17.911 [279/738] Linking static target lib/librte_efd.a
00:03:17.911 [280/738] Generating lib/rte_eventdev_mingw with a custom command
00:03:17.911 [281/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:03:17.911 [282/738] Generating lib/rte_gpudev_def with a custom command
00:03:17.911 [283/738] Generating lib/rte_gpudev_mingw with a custom command
00:03:17.912 [284/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:03:18.170 [285/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.170 [286/738] Linking target lib/librte_efd.so.23.0
00:03:18.170 [287/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:18.170 [288/738] Linking static target lib/librte_cryptodev.a
00:03:18.428 [289/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.428 [290/738] Linking target lib/librte_ethdev.so.23.0
00:03:18.428 [291/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:03:18.428 [292/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:03:18.428 [293/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:03:18.428 [294/738] Linking target lib/librte_metrics.so.23.0
00:03:18.428 [295/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:03:18.428 [296/738] Linking static target lib/librte_gpudev.a
00:03:18.428 [297/738] Linking target lib/librte_bpf.so.23.0
00:03:18.428 [298/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:03:18.428 [299/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:03:18.428 [300/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:03:18.428 [301/738] Generating lib/rte_gro_def with a custom command
00:03:18.687 [302/738] Linking target lib/librte_bitratestats.so.23.0
00:03:18.687 [303/738] Generating lib/rte_gro_mingw with a custom command
00:03:18.687 [304/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:03:18.687 [305/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:03:18.945 [306/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:03:18.945 [307/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:03:18.945 [308/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:03:18.945 [309/738] Generating lib/rte_gso_def with a custom command
00:03:18.945 [310/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:03:18.945 [311/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:03:18.945 [312/738] Generating lib/rte_gso_mingw with a custom command
00:03:18.945 [313/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:03:18.945 [314/738] Linking static target lib/librte_gro.a
00:03:18.945 [315/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:03:18.945 [316/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.945 [317/738] Linking static target lib/librte_eventdev.a
00:03:18.945 [318/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:03:18.945 [319/738] Linking target lib/librte_gpudev.so.23.0
00:03:19.204 [320/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:03:19.204 [321/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.204 [322/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:03:19.204 [323/738] Linking static target lib/librte_gso.a
00:03:19.204 [324/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:03:19.204 [325/738] Linking target lib/librte_gro.so.23.0
00:03:19.204 [326/738] Generating lib/rte_ip_frag_def with a custom command
00:03:19.204 [327/738] Generating lib/rte_ip_frag_mingw with a custom command
00:03:19.204 [328/738] Generating lib/rte_jobstats_def with a custom command
00:03:19.204 [329/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.204 [330/738] Generating lib/rte_jobstats_mingw with a custom command
00:03:19.204 [331/738] Linking target lib/librte_gso.so.23.0
00:03:19.204 [332/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:03:19.204 [333/738] Linking static target lib/librte_jobstats.a
00:03:19.204 [334/738] Generating lib/rte_latencystats_def with a custom command
00:03:19.204 [335/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:03:19.204 [336/738] Generating lib/rte_latencystats_mingw with a custom command
00:03:19.204 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:03:19.204 [338/738] Generating lib/rte_lpm_def with a custom command
00:03:19.462 [339/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:03:19.462 [340/738] Generating lib/rte_lpm_mingw with a custom command
00:03:19.462 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:03:19.462 [342/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.462 [343/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:03:19.462 [344/738] Linking static target lib/librte_ip_frag.a
00:03:19.462 [345/738] Linking target lib/librte_jobstats.so.23.0
00:03:19.462 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:03:19.462 [347/738] Linking static target lib/librte_latencystats.a
00:03:19.720 [348/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:03:19.720 [349/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.720 [350/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.720 [351/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:03:19.720 [352/738] Linking target lib/librte_cryptodev.so.23.0
00:03:19.720 [353/738] Linking target lib/librte_ip_frag.so.23.0
00:03:19.720 [354/738] Generating lib/rte_member_def with a custom command
00:03:19.720 [355/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.720 [356/738] Generating lib/rte_member_mingw with a custom command
00:03:19.720 [357/738] Linking target lib/librte_latencystats.so.23.0
00:03:19.720 [358/738] Generating lib/rte_pcapng_def with a custom command
00:03:19.720 [359/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:03:19.720 [360/738] Generating lib/rte_pcapng_mingw with a custom command
00:03:19.720 [361/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:03:19.720 [362/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:19.979 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:19.979 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:19.979 [365/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:03:19.979 [366/738] Linking static target lib/librte_lpm.a
00:03:19.979 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:03:19.979 [368/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:19.979 [369/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:20.237 [370/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:03:20.237 [371/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.237 [372/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:20.237 [373/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:03:20.237 [374/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:20.237 [375/738] Linking static target lib/librte_pcapng.a
00:03:20.237 [376/738] Generating lib/rte_power_def with a custom command
00:03:20.237 [377/738] Linking target lib/librte_lpm.so.23.0
00:03:20.237 [378/738] Generating lib/rte_power_mingw with a custom command
00:03:20.237 [379/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:03:20.237 [380/738] Generating lib/rte_rawdev_def with a custom command
00:03:20.237 [381/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.237 [382/738] Generating lib/rte_rawdev_mingw with a custom command
00:03:20.237 [383/738] Generating lib/rte_regexdev_def with a custom command
00:03:20.237 [384/738] Linking target lib/librte_eventdev.so.23.0
00:03:20.237 [385/738] Generating lib/rte_regexdev_mingw with a custom command
00:03:20.237 [386/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:03:20.237 [387/738] Generating lib/rte_dmadev_def with a custom command
00:03:20.496 [388/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:03:20.496 [389/738] Generating lib/rte_dmadev_mingw with a custom command
00:03:20.496 [390/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.496 [391/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:20.496 [392/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:03:20.496 [393/738] Linking target lib/librte_pcapng.so.23.0
00:03:20.496 [394/738] Generating lib/rte_rib_def with a custom command
00:03:20.496 [395/738] Generating lib/rte_rib_mingw with a custom command
00:03:20.496 [396/738] Generating lib/rte_reorder_def with a custom command
00:03:20.496 [397/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:20.496 [398/738] Linking static target lib/librte_rawdev.a
00:03:20.496 [399/738] Generating lib/rte_reorder_mingw with a custom command
00:03:20.496 [400/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:03:20.496 [401/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:20.496 [402/738] Linking static target lib/librte_power.a
00:03:20.496 [403/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:20.496 [404/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:20.496 [405/738] Linking static target lib/librte_dmadev.a
00:03:20.496 [406/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:20.754 [407/738] Linking static target lib/librte_regexdev.a
00:03:20.754 [408/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:20.754 [409/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:20.754 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:20.754 [411/738] Linking static target lib/librte_member.a
00:03:20.754 [412/738] Generating lib/rte_sched_def with a custom command
00:03:20.754 [413/738] Generating lib/rte_sched_mingw with a custom command
00:03:20.754 [414/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:20.754 [415/738] Generating lib/rte_security_def with a custom command
00:03:20.754 [416/738] Generating lib/rte_security_mingw with a custom command
00:03:20.754 [417/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.754 [418/738] Linking target lib/librte_rawdev.so.23.0
00:03:21.013 [419/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:21.013 [420/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:21.013 [421/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.013 [422/738] Generating lib/rte_stack_def with a custom command
00:03:21.013 [423/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.013 [424/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:21.013 [425/738] Linking target lib/librte_dmadev.so.23.0
00:03:21.013 [426/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:21.013 [427/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:21.013 [428/738] Linking target lib/librte_member.so.23.0
00:03:21.013 [429/738] Generating lib/rte_stack_mingw with a custom command
00:03:21.013 [430/738] Linking static target lib/librte_reorder.a
00:03:21.013 [431/738] Linking static target lib/librte_rib.a
00:03:21.013 [432/738] Linking static target lib/librte_stack.a
00:03:21.013 [433/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:03:21.013 [434/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.013 [435/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:21.013 [436/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.013 [437/738] Linking target lib/librte_stack.so.23.0
00:03:21.013 [438/738] Linking target lib/librte_regexdev.so.23.0
00:03:21.013 [439/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.272 [440/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.272 [441/738] Linking target lib/librte_reorder.so.23.0
00:03:21.272 [442/738] Linking target lib/librte_power.so.23.0
00:03:21.272 [443/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:21.272 [444/738] Linking static target lib/librte_security.a
00:03:21.272 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.272 [446/738] Linking target lib/librte_rib.so.23.0
00:03:21.530 [447/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:21.530 [448/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:21.530 [449/738] Generating lib/rte_vhost_def with a custom command
00:03:21.530 [450/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:03:21.530 [451/738] Generating lib/rte_vhost_mingw with a custom command
00:03:21.530 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.530 [453/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:21.530 [454/738] Linking target lib/librte_security.so.23.0
00:03:21.530 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:03:21.530 [456/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:21.530 [457/738] Linking static target lib/librte_sched.a
00:03:21.787 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:21.787 [459/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.787 [460/738] Linking target lib/librte_sched.so.23.0
00:03:22.046 [461/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:03:22.046 [462/738] Generating lib/rte_ipsec_def with a custom command
00:03:22.046 [463/738] Generating lib/rte_ipsec_mingw with a custom command
00:03:22.046 [464/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:22.046 [465/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:22.046 [466/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:22.046 [467/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:22.046 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:22.305 [469/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:22.305 [470/738] Generating lib/rte_fib_def with a custom command
00:03:22.305 [471/738] Generating lib/rte_fib_mingw with a custom command
00:03:22.305 [472/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:22.305 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:03:22.305 [474/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:03:22.563 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:22.563 [476/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:22.563 [477/738] Linking static target lib/librte_ipsec.a
00:03:22.563 [478/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:22.822 [479/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:22.822 [480/738] Linking static target lib/librte_fib.a
00:03:22.822 [481/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.822 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:22.822 [483/738] Linking target lib/librte_ipsec.so.23.0
00:03:22.822 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:22.822 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:22.822 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:22.822 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:23.083 [488/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.083 [489/738] Linking target lib/librte_fib.so.23.0
00:03:23.342 [490/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:23.342 [491/738] Generating lib/rte_port_def with a custom command
00:03:23.342 [492/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:23.342 [493/738] Generating lib/rte_port_mingw with a custom command
00:03:23.342 [494/738] Generating lib/rte_pdump_def with a custom command
00:03:23.342 [495/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:23.342 [496/738] Generating lib/rte_pdump_mingw with a custom command
00:03:23.342 [497/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:23.342 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:23.601 [499/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:23.601 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:23.601 [501/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:23.601 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:23.601 [503/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:23.859 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:23.859 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:23.859 [506/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:23.859 [507/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:23.859 [508/738] Linking static target lib/librte_port.a
00:03:23.859 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:23.859 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:24.118 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:24.118 [512/738] Linking static target lib/librte_pdump.a
00:03:24.118 [513/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:24.118 [514/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.118 [515/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.118 [516/738] Linking target lib/librte_pdump.so.23.0
00:03:24.377 [517/738] Linking target lib/librte_port.so.23.0
00:03:24.377 [518/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:24.377 [519/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:03:24.377 [520/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:24.377 [521/738] Generating lib/rte_table_def with a custom command
00:03:24.377 [522/738] Generating lib/rte_table_mingw with a custom command
00:03:24.377 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:24.377 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:24.377 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:24.638 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:24.638 [527/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:24.638 [528/738] Linking static target lib/librte_table.a
00:03:24.638 [529/738] Generating lib/rte_pipeline_def with a custom command
00:03:24.638 [530/738] Generating lib/rte_pipeline_mingw with a custom command
00:03:24.638 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:24.638 [532/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:24.896 [533/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:24.896 [534/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:24.896 [535/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.896 [536/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:24.896 [537/738] Linking target lib/librte_table.so.23.0
00:03:24.896 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:25.155 [539/738] Generating lib/rte_graph_def with a custom command
00:03:25.155 [540/738] Generating lib/rte_graph_mingw with a custom command
00:03:25.155 [541/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:03:25.155 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:25.155 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:25.155 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:25.155 [545/738] Linking static target lib/librte_graph.a
00:03:25.413 [546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:25.413 [547/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:25.413 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:25.413 [549/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:25.413 [550/738] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:25.671 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:25.671 [552/738] Generating lib/rte_node_def with a custom command
00:03:25.671 [553/738] Generating lib/rte_node_mingw with a custom command
00:03:25.671 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:25.671 [555/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.671 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:25.671 [557/738] Linking target lib/librte_graph.so.23.0
00:03:25.671 [558/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:25.929 [559/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:03:25.929 [560/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:25.929 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:25.929 [562/738] Generating drivers/rte_bus_pci_def with a custom command
00:03:25.929 [563/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:25.929 [564/738] Generating drivers/rte_bus_pci_mingw with a custom command
00:03:25.929 [565/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:25.929 [566/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:25.929 [567/738] Generating drivers/rte_bus_vdev_def with a custom command
00:03:25.929 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command
00:03:25.929 [569/738] Generating drivers/rte_mempool_ring_def with a custom command
00:03:25.929 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command
00:03:25.929 [571/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:25.929 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:26.188 [573/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:26.188 [574/738] Linking static target lib/librte_node.a
00:03:26.188 [575/738] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:26.188 [576/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:26.188 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:26.188 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:26.188 [579/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:26.188 [580/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.188 [581/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:26.188 [582/738] Linking static target drivers/librte_bus_vdev.a
00:03:26.188 [583/738] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:26.188 [584/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:26.188 [585/738] Linking static target drivers/librte_bus_pci.a
00:03:26.188 [586/738] Linking target lib/librte_node.so.23.0
00:03:26.188 [587/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:26.188 [588/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:26.446 [589/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.446 [590/738] Linking target drivers/librte_bus_vdev.so.23.0
00:03:26.446 [591/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:03:26.446 [592/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.446 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:26.446 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:26.446 [595/738] Linking target drivers/librte_bus_pci.so.23.0
00:03:26.446 [596/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:26.705 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:03:26.705 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:26.705 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:26.705 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:26.705 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:26.705 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:26.705 [603/738] Linking static target drivers/librte_mempool_ring.a
00:03:26.705 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:26.705 [605/738] Linking target drivers/librte_mempool_ring.so.23.0
00:03:26.963 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:27.221 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:27.491 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:27.491 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:27.491 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:27.752 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:27.752 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:27.752 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:27.752 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:28.011 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:28.011 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:28.011 [617/738] Generating drivers/rte_net_i40e_def with a custom command
00:03:28.011 [618/738] Compiling C object
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:28.011 [619/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:28.578 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:28.578 [621/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:28.837 [622/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:28.837 [623/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:28.837 [624/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:28.837 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:29.096 [626/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:29.096 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:29.096 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:29.096 [629/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:29.096 [630/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:29.096 [631/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:29.355 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:29.614 [633/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:29.614 [634/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:29.614 [635/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:29.614 [636/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:29.614 [637/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:29.872 [638/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:29.872 [639/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:29.872 [640/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:29.872 [641/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:29.872 [642/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:29.872 [643/738] Linking static target drivers/librte_net_i40e.a 00:03:29.872 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:29.872 [645/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:30.130 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:30.130 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:30.130 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:30.388 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.388 [650/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:30.388 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:30.388 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:30.388 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:30.388 [654/738] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:30.388 [655/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:30.388 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:30.646 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:30.646 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:30.646 [659/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:30.646 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:30.903 [661/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:30.903 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:30.904 [663/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:30.904 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:30.904 [665/738] Linking static target lib/librte_vhost.a 00:03:30.904 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:31.186 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:31.460 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:31.460 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:31.461 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:31.461 [671/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.724 [672/738] Linking target lib/librte_vhost.so.23.0 00:03:31.724 [673/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:31.724 [674/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:31.724 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:31.724 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:31.724 [677/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:31.981 [678/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:31.981 [679/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:31.981 [680/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:31.981 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:31.981 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:31.981 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:32.239 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:32.239 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:32.239 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:32.239 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:32.239 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:32.497 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:32.497 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:32.497 [691/738] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:32.755 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:32.755 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:32.755 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:33.013 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:33.013 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:33.013 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:33.013 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:33.271 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:33.271 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:33.530 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:33.530 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:33.530 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:33.530 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:33.530 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:33.788 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:33.788 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:34.046 [708/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:34.046 [709/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:34.046 [710/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:34.304 [711/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:34.304 [712/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:34.304 [713/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:34.304 [714/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:34.304 [715/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:34.561 [716/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:34.561 [717/738] Linking static target lib/librte_pipeline.a 00:03:34.561 [718/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:34.819 [719/738] Linking target app/dpdk-test-cmdline 00:03:34.819 [720/738] Linking target app/dpdk-dumpcap 00:03:34.819 [721/738] Linking target app/dpdk-pdump 00:03:34.819 [722/738] Linking target app/dpdk-test-compress-perf 00:03:34.819 [723/738] Linking target app/dpdk-test-bbdev 00:03:34.819 [724/738] Linking target app/dpdk-test-acl 00:03:34.819 [725/738] Linking target app/dpdk-proc-info 00:03:34.819 [726/738] Linking target app/dpdk-test-crypto-perf 00:03:34.819 [727/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:35.078 [728/738] Linking target app/dpdk-test-fib 00:03:35.078 [729/738] Linking target app/dpdk-test-flow-perf 00:03:35.078 [730/738] Linking target app/dpdk-test-pipeline 00:03:35.078 [731/738] Linking target app/dpdk-test-eventdev 00:03:35.078 [732/738] Linking target app/dpdk-test-regex 00:03:35.078 [733/738] Linking target app/dpdk-testpmd 00:03:35.078 [734/738] Linking target app/dpdk-test-sad 00:03:35.078 [735/738] Linking target app/dpdk-test-gpudev 00:03:35.337 [736/738] Linking target app/dpdk-test-security-perf 00:03:37.870 [737/738] Generating lib/pipeline.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:37.870 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:37.870 09:58:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:37.870 09:58:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:37.870 09:58:05 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:37.870 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:37.870 [0/1] Installing files. 00:03:38.131 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.131 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:38.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.133 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.134 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.135 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:38.135 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.135 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.135 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.136 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.136 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.136 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.136 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_lpm.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.403 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing lib/librte_node.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.404 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.404 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.404 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.404 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.404 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:38.407 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:38.407 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:38.407 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:38.407 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:38.407 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:38.407 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:38.407 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:38.407 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:38.407 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:38.407 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:38.407 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:38.407 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:38.407 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:38.407 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:38.407 Installing symlink pointing to librte_mbuf.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:38.407 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:38.407 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:38.407 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:38.407 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:38.407 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:38.407 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:38.407 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:38.407 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:38.407 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:38.407 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:38.407 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:38.407 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:38.407 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:38.407 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:38.407 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:38.407 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:38.407 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:38.407 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:38.407 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:38.407 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:38.407 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:38.407 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:38.407 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:38.407 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:38.407 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:38.407 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:38.407 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:38.407 Installing symlink pointing to librte_compressdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:38.407 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:38.407 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:38.407 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:38.407 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:38.407 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:38.407 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:38.407 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:38.407 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:38.407 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:38.407 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:38.407 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:38.407 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:38.407 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:38.407 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:38.407 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:38.407 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:38.407 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:38.407 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:38.407 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:38.407 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:38.407 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:38.407 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:38.407 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:38.407 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:38.407 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:38.407 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:38.407 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:38.407 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:38.407 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:38.407 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:38.408 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:38.408 Installing symlink pointing to librte_latencystats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:38.408 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:38.408 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:38.408 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:38.408 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:38.408 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:38.408 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:38.408 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:38.408 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:38.408 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:38.408 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:38.408 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:38.408 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:38.408 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:38.408 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:38.408 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:38.408 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:38.408 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:38.408 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:38.408 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:38.408 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:38.408 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:38.408 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:38.408 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:38.408 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:38.408 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:38.408 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:38.408 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:38.408 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 
00:03:38.408 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:38.408 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:38.408 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:38.408 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:38.408 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:38.408 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:38.408 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:38.408 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:38.408 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:38.408 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:38.408 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:38.408 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:38.408 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:38.408 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:38.408 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:38.408 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:38.408 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:38.408 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:38.408 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:38.408 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:38.408 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:38.408 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:38.408 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:38.408 09:58:06 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:38.408 ************************************ 00:03:38.408 END TEST build_native_dpdk 00:03:38.408 ************************************ 00:03:38.408 09:58:06 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:38.408 00:03:38.408 real 0m33.332s 00:03:38.408 user 3m28.230s 00:03:38.408 sys 0m31.915s 00:03:38.408 09:58:06 build_native_dpdk -- common/autotest_common.sh@1126 -- $ 
xtrace_disable 00:03:38.408 09:58:06 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:38.690 09:58:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:38.690 09:58:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:38.690 09:58:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:38.690 09:58:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:38.690 09:58:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:38.690 09:58:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:38.690 09:58:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:38.690 09:58:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:38.690 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:38.690 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.690 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:38.690 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:38.950 Using 'verbs' RDMA provider 00:03:49.887 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:02.122 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:02.122 Creating mk/config.mk...done. 00:04:02.122 Creating mk/cc.flags.mk...done. 00:04:02.122 Type 'make' to build. 00:04:02.122 09:58:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:02.122 09:58:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:02.122 09:58:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:02.122 09:58:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:02.122 ************************************ 00:04:02.122 START TEST make 00:04:02.122 ************************************ 00:04:02.122 09:58:29 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:02.122 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:02.122 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:02.122 meson setup builddir \ 00:04:02.122 -Dwith-libaio=enabled \ 00:04:02.122 -Dwith-liburing=enabled \ 00:04:02.122 -Dwith-libvfn=disabled \ 00:04:02.122 -Dwith-spdk=false && \ 00:04:02.122 meson compile -C builddir && \ 00:04:02.122 cd -) 00:04:02.122 make[1]: Nothing to be done for 'all'. 
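The configure invocation above points SPDK at the freshly installed DPDK via --with-dpdk and discovers it through the libdpdk.pc files installed earlier to build/lib/pkgconfig. A minimal standalone sketch, assuming the same workspace paths shown in this log, to verify that DPDK is discoverable by pkg-config before running configure:

    # Hypothetical check outside the CI harness; paths copied from the log above.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk        # should report the 22.11.x build installed above
    pkg-config --cflags --libs libdpdk     # the compile/link flags configure consumes via libdpdk.pc

If either command fails, configure would fall back to (or error without) a system DPDK rather than the one built in this job.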
00:04:04.025 The Meson build system 00:04:04.025 Version: 1.5.0 00:04:04.025 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:04.025 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:04.025 Build type: native build 00:04:04.025 Project name: xnvme 00:04:04.025 Project version: 0.7.3 00:04:04.025 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:04.025 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:04.025 Host machine cpu family: x86_64 00:04:04.025 Host machine cpu: x86_64 00:04:04.025 Message: host_machine.system: linux 00:04:04.025 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:04.025 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:04.025 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:04.025 Run-time dependency threads found: YES 00:04:04.025 Has header "setupapi.h" : NO 00:04:04.025 Has header "linux/blkzoned.h" : YES 00:04:04.025 Has header "linux/blkzoned.h" : YES (cached) 00:04:04.025 Has header "libaio.h" : YES 00:04:04.025 Library aio found: YES 00:04:04.025 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:04.025 Run-time dependency liburing found: YES 2.2 00:04:04.025 Dependency libvfn skipped: feature with-libvfn disabled 00:04:04.025 Run-time dependency appleframeworks found: NO (tried framework) 00:04:04.025 Run-time dependency appleframeworks found: NO (tried framework) 00:04:04.025 Configuring xnvme_config.h using configuration 00:04:04.025 Configuring xnvme.spec using configuration 00:04:04.025 Run-time dependency bash-completion found: YES 2.11 00:04:04.025 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:04.025 Program cp found: YES (/usr/bin/cp) 00:04:04.025 Has header "winsock2.h" : NO 00:04:04.026 Has header "dbghelp.h" : NO 00:04:04.026 Library rpcrt4 found: NO 00:04:04.026 Library rt found: YES 00:04:04.026 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:04.026 Found CMake: /usr/bin/cmake (3.27.7) 00:04:04.026 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:04.026 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:04.026 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:04.026 Build targets in project: 32 00:04:04.026 00:04:04.026 xnvme 0.7.3 00:04:04.026 00:04:04.026 User defined options 00:04:04.026 with-libaio : enabled 00:04:04.026 with-liburing: enabled 00:04:04.026 with-libvfn : disabled 00:04:04.026 with-spdk : false 00:04:04.026 00:04:04.026 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:04.026 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:04.026 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:04.026 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:04.026 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:04.026 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:04.284 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:04.284 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:04.284 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:04.284 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:04.284 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:04.284 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 
00:04:04.284 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:04.284 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:04.284 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:04.284 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:04.284 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:04.284 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:04.284 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:04.284 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:04.284 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:04.284 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:04.284 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:04.284 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:04.284 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:04.284 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:04.284 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:04.284 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:04.284 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:04.284 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:04.284 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:04.284 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:04.542 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:04.542 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:04.542 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:04.542 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:04.542 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:04.542 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:04.542 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:04.542 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:04.542 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:04.542 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:04.542 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:04.542 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:04.542 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:04.542 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:04.542 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:04.542 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:04.542 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:04.542 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:04.542 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:04.542 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:04.542 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:04.542 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:04.542 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:04.542 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:04.542 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:04.542 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:04.542 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:04.542 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:04.542 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:04.542 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:04.542 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:04.542 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:04.542 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:04.542 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:04.542 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:04.801 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:04.801 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:04.801 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:04.801 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:04.801 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:04.801 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:04.801 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:04.801 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:04.801 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:04.801 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:04.801 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:04.801 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:04.801 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:04.801 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:04.801 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:04.801 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:04.801 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:04.801 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:05.059 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:05.059 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:05.059 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:05.059 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:05.059 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:05.059 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:05.059 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:05.059 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:05.059 [92/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:05.059 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:05.059 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:05.059 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:05.059 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:05.059 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:05.059 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:05.059 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:05.059 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:05.059 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:05.059 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:05.059 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:05.059 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:05.059 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:05.059 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:05.059 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:05.059 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:05.059 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:05.059 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:05.059 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:05.059 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:05.059 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:05.059 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:05.059 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:05.059 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:05.059 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:05.059 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:05.059 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:05.317 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:05.317 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:05.317 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:05.317 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:05.317 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:05.317 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:05.317 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:05.317 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:05.317 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:05.317 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:05.317 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:05.317 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:05.317 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:05.317 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:05.317 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:05.317 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:05.317 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:05.317 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:05.317 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:05.317 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:05.317 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:05.317 [141/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:05.575 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 
00:04:05.575 [143/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:05.575 [144/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:05.575 [145/203] Linking target lib/libxnvme.so 00:04:05.575 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:05.575 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:05.575 [148/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:05.575 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:05.575 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:05.575 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:05.575 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:05.575 [153/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:05.575 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:05.575 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:05.575 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:05.575 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:05.575 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:05.575 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:05.575 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:05.575 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:05.833 [162/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:05.833 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:05.833 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:05.833 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:05.833 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:05.833 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:05.833 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:05.833 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:05.833 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:05.833 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:05.833 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:05.833 [173/203] Linking static target lib/libxnvme.a 00:04:06.093 [174/203] Linking target tests/xnvme_tests_async_intf 00:04:06.093 [175/203] Linking target tests/xnvme_tests_cli 00:04:06.093 [176/203] Linking target tests/xnvme_tests_lblk 00:04:06.093 [177/203] Linking target tests/xnvme_tests_buf 00:04:06.093 [178/203] Linking target tests/xnvme_tests_ioworker 00:04:06.093 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:04:06.093 [180/203] Linking target tests/xnvme_tests_enum 00:04:06.093 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:06.093 [182/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:06.093 [183/203] Linking target tests/xnvme_tests_scc 00:04:06.093 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:06.093 [185/203] Linking target tests/xnvme_tests_znd_state 00:04:06.093 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:06.093 [187/203] Linking target tools/lblk 00:04:06.093 [188/203] Linking target tests/xnvme_tests_map 00:04:06.093 [189/203] Linking target tests/xnvme_tests_kvs 00:04:06.093 [190/203] Linking target tests/xnvme_tests_znd_append 00:04:06.093 [191/203] Linking target tools/zoned 
00:04:06.093 [192/203] Linking target tools/xnvme_file 00:04:06.093 [193/203] Linking target tools/xdd 00:04:06.093 [194/203] Linking target tools/xnvme 00:04:06.093 [195/203] Linking target tools/kvs 00:04:06.093 [196/203] Linking target examples/xnvme_io_async 00:04:06.094 [197/203] Linking target examples/xnvme_dev 00:04:06.094 [198/203] Linking target examples/xnvme_enum 00:04:06.094 [199/203] Linking target examples/xnvme_hello 00:04:06.094 [200/203] Linking target examples/xnvme_single_async 00:04:06.094 [201/203] Linking target examples/zoned_io_sync 00:04:06.094 [202/203] Linking target examples/zoned_io_async 00:04:06.094 [203/203] Linking target examples/xnvme_single_sync 00:04:06.094 INFO: autodetecting backend as ninja 00:04:06.094 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:06.094 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:38.177 CC lib/log/log.o 00:04:38.177 CC lib/ut/ut.o 00:04:38.177 CC lib/log/log_deprecated.o 00:04:38.177 CC lib/log/log_flags.o 00:04:38.177 CC lib/ut_mock/mock.o 00:04:38.177 LIB libspdk_log.a 00:04:38.177 LIB libspdk_ut.a 00:04:38.177 LIB libspdk_ut_mock.a 00:04:38.177 SO libspdk_log.so.7.0 00:04:38.177 SO libspdk_ut.so.2.0 00:04:38.177 SO libspdk_ut_mock.so.6.0 00:04:38.177 SYMLINK libspdk_ut.so 00:04:38.177 SYMLINK libspdk_log.so 00:04:38.177 SYMLINK libspdk_ut_mock.so 00:04:38.177 CC lib/ioat/ioat.o 00:04:38.177 CC lib/util/bit_array.o 00:04:38.177 CC lib/util/cpuset.o 00:04:38.177 CC lib/util/base64.o 00:04:38.177 CC lib/util/crc32.o 00:04:38.177 CC lib/util/crc16.o 00:04:38.177 CC lib/util/crc32c.o 00:04:38.177 CXX lib/trace_parser/trace.o 00:04:38.177 CC lib/dma/dma.o 00:04:38.177 CC lib/vfio_user/host/vfio_user_pci.o 00:04:38.177 CC lib/util/crc32_ieee.o 00:04:38.177 CC lib/util/crc64.o 00:04:38.177 CC lib/util/dif.o 00:04:38.177 CC lib/util/fd.o 00:04:38.177 LIB libspdk_dma.a 00:04:38.177 CC lib/util/fd_group.o 00:04:38.177 LIB libspdk_ioat.a 00:04:38.177 CC lib/util/file.o 00:04:38.177 SO libspdk_dma.so.5.0 00:04:38.177 CC lib/util/hexlify.o 00:04:38.177 SO libspdk_ioat.so.7.0 00:04:38.177 CC lib/vfio_user/host/vfio_user.o 00:04:38.177 SYMLINK libspdk_dma.so 00:04:38.177 CC lib/util/iov.o 00:04:38.177 CC lib/util/math.o 00:04:38.177 CC lib/util/net.o 00:04:38.177 SYMLINK libspdk_ioat.so 00:04:38.177 CC lib/util/pipe.o 00:04:38.177 CC lib/util/strerror_tls.o 00:04:38.177 CC lib/util/string.o 00:04:38.177 CC lib/util/uuid.o 00:04:38.177 CC lib/util/xor.o 00:04:38.177 CC lib/util/zipf.o 00:04:38.177 CC lib/util/md5.o 00:04:38.177 LIB libspdk_vfio_user.a 00:04:38.177 SO libspdk_vfio_user.so.5.0 00:04:38.177 SYMLINK libspdk_vfio_user.so 00:04:38.177 LIB libspdk_util.a 00:04:38.177 SO libspdk_util.so.10.0 00:04:38.177 LIB libspdk_trace_parser.a 00:04:38.177 SYMLINK libspdk_util.so 00:04:38.177 SO libspdk_trace_parser.so.6.0 00:04:38.177 SYMLINK libspdk_trace_parser.so 00:04:38.177 CC lib/rdma_provider/common.o 00:04:38.177 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:38.177 CC lib/json/json_parse.o 00:04:38.177 CC lib/json/json_util.o 00:04:38.177 CC lib/json/json_write.o 00:04:38.177 CC lib/idxd/idxd.o 00:04:38.177 CC lib/env_dpdk/env.o 00:04:38.177 CC lib/rdma_utils/rdma_utils.o 00:04:38.177 CC lib/conf/conf.o 00:04:38.177 CC lib/vmd/vmd.o 00:04:38.177 CC lib/vmd/led.o 00:04:38.177 LIB libspdk_rdma_provider.a 00:04:38.177 SO libspdk_rdma_provider.so.6.0 00:04:38.177 CC lib/idxd/idxd_user.o 00:04:38.177 LIB libspdk_conf.a 00:04:38.177 CC lib/idxd/idxd_kernel.o 00:04:38.177 
SO libspdk_conf.so.6.0 00:04:38.177 SYMLINK libspdk_rdma_provider.so 00:04:38.177 LIB libspdk_rdma_utils.a 00:04:38.177 CC lib/env_dpdk/memory.o 00:04:38.177 SO libspdk_rdma_utils.so.1.0 00:04:38.177 SYMLINK libspdk_conf.so 00:04:38.177 LIB libspdk_json.a 00:04:38.177 CC lib/env_dpdk/pci.o 00:04:38.177 CC lib/env_dpdk/init.o 00:04:38.177 SO libspdk_json.so.6.0 00:04:38.177 SYMLINK libspdk_rdma_utils.so 00:04:38.177 CC lib/env_dpdk/threads.o 00:04:38.177 SYMLINK libspdk_json.so 00:04:38.177 CC lib/env_dpdk/pci_ioat.o 00:04:38.177 CC lib/env_dpdk/pci_virtio.o 00:04:38.177 CC lib/env_dpdk/pci_vmd.o 00:04:38.177 CC lib/env_dpdk/pci_idxd.o 00:04:38.177 CC lib/jsonrpc/jsonrpc_server.o 00:04:38.177 CC lib/env_dpdk/pci_event.o 00:04:38.177 CC lib/env_dpdk/sigbus_handler.o 00:04:38.177 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:38.177 CC lib/env_dpdk/pci_dpdk.o 00:04:38.177 LIB libspdk_idxd.a 00:04:38.177 CC lib/jsonrpc/jsonrpc_client.o 00:04:38.177 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:38.177 SO libspdk_idxd.so.12.1 00:04:38.177 LIB libspdk_vmd.a 00:04:38.177 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:38.177 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:38.177 SYMLINK libspdk_idxd.so 00:04:38.177 SO libspdk_vmd.so.6.0 00:04:38.177 SYMLINK libspdk_vmd.so 00:04:38.178 LIB libspdk_jsonrpc.a 00:04:38.178 SO libspdk_jsonrpc.so.6.0 00:04:38.178 SYMLINK libspdk_jsonrpc.so 00:04:38.178 LIB libspdk_env_dpdk.a 00:04:38.178 SO libspdk_env_dpdk.so.15.0 00:04:38.178 CC lib/rpc/rpc.o 00:04:38.178 SYMLINK libspdk_env_dpdk.so 00:04:38.178 LIB libspdk_rpc.a 00:04:38.178 SO libspdk_rpc.so.6.0 00:04:38.178 SYMLINK libspdk_rpc.so 00:04:38.178 CC lib/notify/notify.o 00:04:38.178 CC lib/keyring/keyring.o 00:04:38.178 CC lib/notify/notify_rpc.o 00:04:38.178 CC lib/keyring/keyring_rpc.o 00:04:38.178 CC lib/trace/trace.o 00:04:38.178 CC lib/trace/trace_flags.o 00:04:38.178 CC lib/trace/trace_rpc.o 00:04:38.178 LIB libspdk_notify.a 00:04:38.178 SO libspdk_notify.so.6.0 00:04:38.178 LIB libspdk_keyring.a 00:04:38.178 SYMLINK libspdk_notify.so 00:04:38.178 SO libspdk_keyring.so.2.0 00:04:38.178 LIB libspdk_trace.a 00:04:38.178 SO libspdk_trace.so.11.0 00:04:38.178 SYMLINK libspdk_keyring.so 00:04:38.178 SYMLINK libspdk_trace.so 00:04:38.178 CC lib/thread/iobuf.o 00:04:38.178 CC lib/thread/thread.o 00:04:38.178 CC lib/sock/sock.o 00:04:38.178 CC lib/sock/sock_rpc.o 00:04:38.178 LIB libspdk_sock.a 00:04:38.178 SO libspdk_sock.so.10.0 00:04:38.178 SYMLINK libspdk_sock.so 00:04:38.178 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:38.178 CC lib/nvme/nvme_fabric.o 00:04:38.178 CC lib/nvme/nvme_ctrlr.o 00:04:38.178 CC lib/nvme/nvme_ns_cmd.o 00:04:38.178 CC lib/nvme/nvme_ns.o 00:04:38.178 CC lib/nvme/nvme_qpair.o 00:04:38.178 CC lib/nvme/nvme_pcie_common.o 00:04:38.178 CC lib/nvme/nvme_pcie.o 00:04:38.178 CC lib/nvme/nvme.o 00:04:38.747 CC lib/nvme/nvme_quirks.o 00:04:38.747 CC lib/nvme/nvme_transport.o 00:04:38.747 CC lib/nvme/nvme_discovery.o 00:04:38.747 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:38.747 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:38.747 CC lib/nvme/nvme_tcp.o 00:04:39.006 CC lib/nvme/nvme_opal.o 00:04:39.006 LIB libspdk_thread.a 00:04:39.006 SO libspdk_thread.so.10.1 00:04:39.006 CC lib/nvme/nvme_io_msg.o 00:04:39.006 SYMLINK libspdk_thread.so 00:04:39.006 CC lib/nvme/nvme_poll_group.o 00:04:39.264 CC lib/nvme/nvme_zns.o 00:04:39.264 CC lib/nvme/nvme_stubs.o 00:04:39.264 CC lib/nvme/nvme_auth.o 00:04:39.522 CC lib/blob/blobstore.o 00:04:39.522 CC lib/blob/request.o 00:04:39.522 CC lib/accel/accel.o 00:04:39.522 CC lib/init/json_config.o 
00:04:39.522 CC lib/init/subsystem.o 00:04:39.522 CC lib/init/subsystem_rpc.o 00:04:39.782 CC lib/init/rpc.o 00:04:39.782 CC lib/nvme/nvme_cuse.o 00:04:39.782 CC lib/accel/accel_rpc.o 00:04:39.782 CC lib/accel/accel_sw.o 00:04:39.782 CC lib/nvme/nvme_rdma.o 00:04:39.782 LIB libspdk_init.a 00:04:39.782 CC lib/blob/zeroes.o 00:04:39.782 SO libspdk_init.so.6.0 00:04:40.042 CC lib/blob/blob_bs_dev.o 00:04:40.042 SYMLINK libspdk_init.so 00:04:40.042 CC lib/virtio/virtio.o 00:04:40.042 CC lib/virtio/virtio_vhost_user.o 00:04:40.042 CC lib/fsdev/fsdev.o 00:04:40.042 CC lib/event/app.o 00:04:40.301 CC lib/event/reactor.o 00:04:40.301 CC lib/virtio/virtio_vfio_user.o 00:04:40.301 CC lib/virtio/virtio_pci.o 00:04:40.301 CC lib/event/log_rpc.o 00:04:40.301 CC lib/event/app_rpc.o 00:04:40.559 CC lib/event/scheduler_static.o 00:04:40.559 CC lib/fsdev/fsdev_io.o 00:04:40.559 LIB libspdk_accel.a 00:04:40.559 LIB libspdk_virtio.a 00:04:40.559 SO libspdk_accel.so.16.0 00:04:40.559 CC lib/fsdev/fsdev_rpc.o 00:04:40.559 SO libspdk_virtio.so.7.0 00:04:40.559 SYMLINK libspdk_accel.so 00:04:40.559 SYMLINK libspdk_virtio.so 00:04:40.559 LIB libspdk_event.a 00:04:40.818 SO libspdk_event.so.14.0 00:04:40.818 SYMLINK libspdk_event.so 00:04:40.818 LIB libspdk_nvme.a 00:04:40.818 CC lib/bdev/bdev.o 00:04:40.818 CC lib/bdev/bdev_zone.o 00:04:40.818 CC lib/bdev/bdev_rpc.o 00:04:40.818 CC lib/bdev/scsi_nvme.o 00:04:40.818 CC lib/bdev/part.o 00:04:40.818 LIB libspdk_fsdev.a 00:04:40.818 SO libspdk_fsdev.so.1.0 00:04:41.076 SYMLINK libspdk_fsdev.so 00:04:41.076 SO libspdk_nvme.so.14.0 00:04:41.076 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:41.076 SYMLINK libspdk_nvme.so 00:04:41.648 LIB libspdk_fuse_dispatcher.a 00:04:41.648 SO libspdk_fuse_dispatcher.so.1.0 00:04:41.909 SYMLINK libspdk_fuse_dispatcher.so 00:04:42.479 LIB libspdk_blob.a 00:04:42.479 SO libspdk_blob.so.11.0 00:04:42.739 SYMLINK libspdk_blob.so 00:04:42.739 CC lib/blobfs/blobfs.o 00:04:42.739 CC lib/blobfs/tree.o 00:04:42.739 CC lib/lvol/lvol.o 00:04:43.001 LIB libspdk_bdev.a 00:04:43.001 SO libspdk_bdev.so.16.0 00:04:43.001 SYMLINK libspdk_bdev.so 00:04:43.260 CC lib/nbd/nbd.o 00:04:43.260 CC lib/nbd/nbd_rpc.o 00:04:43.260 CC lib/nvmf/ctrlr.o 00:04:43.260 CC lib/nvmf/ctrlr_discovery.o 00:04:43.260 CC lib/nvmf/ctrlr_bdev.o 00:04:43.260 CC lib/ublk/ublk.o 00:04:43.260 CC lib/scsi/dev.o 00:04:43.260 CC lib/ftl/ftl_core.o 00:04:43.522 CC lib/scsi/lun.o 00:04:43.522 CC lib/ublk/ublk_rpc.o 00:04:43.522 LIB libspdk_nbd.a 00:04:43.522 SO libspdk_nbd.so.7.0 00:04:43.522 SYMLINK libspdk_nbd.so 00:04:43.522 CC lib/scsi/port.o 00:04:43.781 CC lib/ftl/ftl_init.o 00:04:43.781 LIB libspdk_blobfs.a 00:04:43.781 CC lib/scsi/scsi.o 00:04:43.781 SO libspdk_blobfs.so.10.0 00:04:43.781 CC lib/nvmf/subsystem.o 00:04:43.781 CC lib/ftl/ftl_layout.o 00:04:43.781 SYMLINK libspdk_blobfs.so 00:04:43.781 CC lib/ftl/ftl_debug.o 00:04:43.781 CC lib/scsi/scsi_bdev.o 00:04:43.781 LIB libspdk_lvol.a 00:04:43.781 CC lib/scsi/scsi_pr.o 00:04:43.781 SO libspdk_lvol.so.10.0 00:04:43.781 CC lib/scsi/scsi_rpc.o 00:04:43.781 SYMLINK libspdk_lvol.so 00:04:43.781 CC lib/scsi/task.o 00:04:43.781 CC lib/ftl/ftl_io.o 00:04:44.040 LIB libspdk_ublk.a 00:04:44.040 SO libspdk_ublk.so.3.0 00:04:44.040 CC lib/ftl/ftl_sb.o 00:04:44.040 CC lib/ftl/ftl_l2p.o 00:04:44.040 CC lib/nvmf/nvmf.o 00:04:44.040 SYMLINK libspdk_ublk.so 00:04:44.040 CC lib/nvmf/nvmf_rpc.o 00:04:44.040 CC lib/nvmf/transport.o 00:04:44.040 CC lib/ftl/ftl_l2p_flat.o 00:04:44.040 CC lib/nvmf/tcp.o 00:04:44.040 CC lib/nvmf/stubs.o 
00:04:44.041 CC lib/nvmf/mdns_server.o 00:04:44.301 LIB libspdk_scsi.a 00:04:44.301 CC lib/ftl/ftl_nv_cache.o 00:04:44.301 SO libspdk_scsi.so.9.0 00:04:44.301 SYMLINK libspdk_scsi.so 00:04:44.301 CC lib/nvmf/rdma.o 00:04:44.561 CC lib/ftl/ftl_band.o 00:04:44.561 CC lib/ftl/ftl_band_ops.o 00:04:44.561 CC lib/ftl/ftl_writer.o 00:04:44.561 CC lib/iscsi/conn.o 00:04:44.820 CC lib/iscsi/init_grp.o 00:04:44.820 CC lib/ftl/ftl_rq.o 00:04:44.820 CC lib/nvmf/auth.o 00:04:44.820 CC lib/ftl/ftl_reloc.o 00:04:44.820 CC lib/vhost/vhost.o 00:04:45.078 CC lib/vhost/vhost_rpc.o 00:04:45.078 CC lib/vhost/vhost_scsi.o 00:04:45.078 CC lib/vhost/vhost_blk.o 00:04:45.335 CC lib/vhost/rte_vhost_user.o 00:04:45.335 CC lib/ftl/ftl_l2p_cache.o 00:04:45.335 CC lib/iscsi/iscsi.o 00:04:45.335 CC lib/ftl/ftl_p2l.o 00:04:45.335 CC lib/ftl/ftl_p2l_log.o 00:04:45.335 CC lib/ftl/mngt/ftl_mngt.o 00:04:45.593 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:45.593 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:45.593 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:45.593 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.593 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.853 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.853 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.853 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.853 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.853 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.853 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.853 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.853 CC lib/ftl/utils/ftl_conf.o 00:04:45.853 CC lib/ftl/utils/ftl_md.o 00:04:45.853 CC lib/ftl/utils/ftl_mempool.o 00:04:46.114 CC lib/ftl/utils/ftl_bitmap.o 00:04:46.114 CC lib/iscsi/param.o 00:04:46.114 CC lib/ftl/utils/ftl_property.o 00:04:46.114 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:46.114 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:46.114 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.114 LIB libspdk_nvmf.a 00:04:46.114 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.114 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.114 LIB libspdk_vhost.a 00:04:46.114 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.114 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.114 SO libspdk_vhost.so.8.0 00:04:46.372 CC lib/iscsi/portal_grp.o 00:04:46.372 CC lib/iscsi/tgt_node.o 00:04:46.372 SO libspdk_nvmf.so.19.0 00:04:46.372 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.372 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.372 SYMLINK libspdk_vhost.so 00:04:46.372 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.372 CC lib/iscsi/iscsi_subsystem.o 00:04:46.372 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.372 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.372 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.372 SYMLINK libspdk_nvmf.so 00:04:46.372 CC lib/ftl/base/ftl_base_dev.o 00:04:46.372 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.633 CC lib/ftl/ftl_trace.o 00:04:46.633 CC lib/iscsi/iscsi_rpc.o 00:04:46.633 CC lib/iscsi/task.o 00:04:46.633 LIB libspdk_ftl.a 00:04:46.892 SO libspdk_ftl.so.9.0 00:04:46.892 LIB libspdk_iscsi.a 00:04:46.892 SO libspdk_iscsi.so.8.0 00:04:47.150 SYMLINK libspdk_ftl.so 00:04:47.150 SYMLINK libspdk_iscsi.so 00:04:47.408 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.408 CC module/accel/error/accel_error.o 00:04:47.408 CC module/sock/posix/posix.o 00:04:47.408 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.408 CC module/accel/ioat/accel_ioat.o 00:04:47.408 CC module/blob/bdev/blob_bdev.o 00:04:47.408 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.408 CC module/fsdev/aio/fsdev_aio.o 00:04:47.408 CC module/keyring/file/keyring.o 00:04:47.408 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.408 LIB libspdk_env_dpdk_rpc.a 00:04:47.408 SO libspdk_env_dpdk_rpc.so.6.0 00:04:47.408 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.408 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:47.408 CC module/accel/error/accel_error_rpc.o 00:04:47.666 CC module/keyring/file/keyring_rpc.o 00:04:47.666 LIB libspdk_scheduler_gscheduler.a 00:04:47.666 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.666 SO libspdk_scheduler_gscheduler.so.4.0 00:04:47.666 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:47.666 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.666 LIB libspdk_scheduler_dynamic.a 00:04:47.666 SO libspdk_scheduler_dynamic.so.4.0 00:04:47.666 LIB libspdk_blob_bdev.a 00:04:47.666 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.666 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:47.666 SO libspdk_blob_bdev.so.11.0 00:04:47.666 SYMLINK libspdk_scheduler_gscheduler.so 00:04:47.666 LIB libspdk_accel_error.a 00:04:47.666 LIB libspdk_keyring_file.a 00:04:47.666 SO libspdk_accel_error.so.2.0 00:04:47.666 LIB libspdk_accel_ioat.a 00:04:47.666 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.666 SO libspdk_keyring_file.so.2.0 00:04:47.666 SYMLINK libspdk_blob_bdev.so 00:04:47.666 SO libspdk_accel_ioat.so.6.0 00:04:47.666 SYMLINK libspdk_accel_error.so 00:04:47.666 SYMLINK libspdk_keyring_file.so 00:04:47.666 SYMLINK libspdk_accel_ioat.so 00:04:47.666 CC module/keyring/linux/keyring.o 00:04:47.666 CC module/keyring/linux/keyring_rpc.o 00:04:47.666 CC module/accel/dsa/accel_dsa.o 00:04:47.923 CC module/accel/iaa/accel_iaa.o 00:04:47.923 LIB libspdk_keyring_linux.a 00:04:47.924 CC module/bdev/error/vbdev_error.o 00:04:47.924 CC module/bdev/delay/vbdev_delay.o 00:04:47.924 SO libspdk_keyring_linux.so.1.0 00:04:47.924 CC module/blobfs/bdev/blobfs_bdev.o 00:04:47.924 LIB libspdk_fsdev_aio.a 00:04:47.924 CC module/bdev/gpt/gpt.o 00:04:47.924 SYMLINK libspdk_keyring_linux.so 00:04:47.924 CC module/bdev/gpt/vbdev_gpt.o 00:04:47.924 SO libspdk_fsdev_aio.so.1.0 00:04:47.924 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.924 LIB libspdk_sock_posix.a 00:04:47.924 CC module/bdev/lvol/vbdev_lvol.o 00:04:47.924 SO libspdk_sock_posix.so.6.0 00:04:47.924 SYMLINK libspdk_fsdev_aio.so 00:04:47.924 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:47.924 CC module/accel/dsa/accel_dsa_rpc.o 00:04:48.181 CC module/bdev/error/vbdev_error_rpc.o 00:04:48.181 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.181 LIB libspdk_accel_iaa.a 00:04:48.181 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.181 SYMLINK libspdk_sock_posix.so 00:04:48.181 SO libspdk_accel_iaa.so.3.0 00:04:48.181 LIB libspdk_accel_dsa.a 00:04:48.181 LIB libspdk_bdev_gpt.a 00:04:48.181 SYMLINK libspdk_accel_iaa.so 00:04:48.181 SO libspdk_accel_dsa.so.5.0 00:04:48.181 LIB libspdk_blobfs_bdev.a 00:04:48.181 SO libspdk_bdev_gpt.so.6.0 00:04:48.181 SO libspdk_blobfs_bdev.so.6.0 00:04:48.181 CC module/bdev/malloc/bdev_malloc.o 00:04:48.181 SYMLINK libspdk_accel_dsa.so 00:04:48.181 SYMLINK libspdk_bdev_gpt.so 00:04:48.181 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:48.181 LIB libspdk_bdev_error.a 00:04:48.181 LIB libspdk_bdev_delay.a 00:04:48.181 SO libspdk_bdev_error.so.6.0 00:04:48.181 SYMLINK libspdk_blobfs_bdev.so 00:04:48.181 SO libspdk_bdev_delay.so.6.0 00:04:48.181 CC module/bdev/null/bdev_null.o 00:04:48.181 CC module/bdev/nvme/bdev_nvme.o 00:04:48.181 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.181 SYMLINK libspdk_bdev_error.so 00:04:48.181 SYMLINK libspdk_bdev_delay.so 00:04:48.439 CC module/bdev/passthru/vbdev_passthru.o 
00:04:48.439 CC module/bdev/raid/bdev_raid.o 00:04:48.439 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.439 CC module/bdev/split/vbdev_split.o 00:04:48.439 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.439 CC module/bdev/null/bdev_null_rpc.o 00:04:48.439 LIB libspdk_bdev_lvol.a 00:04:48.439 SO libspdk_bdev_lvol.so.6.0 00:04:48.439 LIB libspdk_bdev_malloc.a 00:04:48.698 LIB libspdk_bdev_null.a 00:04:48.698 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.698 SYMLINK libspdk_bdev_lvol.so 00:04:48.698 SO libspdk_bdev_malloc.so.6.0 00:04:48.698 SO libspdk_bdev_null.so.6.0 00:04:48.698 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.698 SYMLINK libspdk_bdev_malloc.so 00:04:48.698 SYMLINK libspdk_bdev_null.so 00:04:48.698 CC module/bdev/xnvme/bdev_xnvme.o 00:04:48.698 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.698 LIB libspdk_bdev_passthru.a 00:04:48.698 CC module/bdev/aio/bdev_aio.o 00:04:48.698 LIB libspdk_bdev_split.a 00:04:48.698 CC module/bdev/aio/bdev_aio_rpc.o 00:04:48.698 SO libspdk_bdev_passthru.so.6.0 00:04:48.698 CC module/bdev/iscsi/bdev_iscsi.o 00:04:48.698 SO libspdk_bdev_split.so.6.0 00:04:48.698 CC module/bdev/ftl/bdev_ftl.o 00:04:48.698 SYMLINK libspdk_bdev_passthru.so 00:04:48.698 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:48.698 SYMLINK libspdk_bdev_split.so 00:04:48.956 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.956 LIB libspdk_bdev_zone_block.a 00:04:48.956 SO libspdk_bdev_zone_block.so.6.0 00:04:48.956 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:48.956 SYMLINK libspdk_bdev_zone_block.so 00:04:48.956 CC module/bdev/raid/raid0.o 00:04:48.956 CC module/bdev/raid/raid1.o 00:04:48.956 LIB libspdk_bdev_xnvme.a 00:04:48.956 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:48.956 CC module/bdev/raid/concat.o 00:04:48.956 LIB libspdk_bdev_aio.a 00:04:48.956 SO libspdk_bdev_xnvme.so.3.0 00:04:48.956 LIB libspdk_bdev_ftl.a 00:04:48.956 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:48.956 SO libspdk_bdev_aio.so.6.0 00:04:48.956 SO libspdk_bdev_ftl.so.6.0 00:04:49.215 SYMLINK libspdk_bdev_xnvme.so 00:04:49.215 CC module/bdev/nvme/nvme_rpc.o 00:04:49.215 SYMLINK libspdk_bdev_ftl.so 00:04:49.215 SYMLINK libspdk_bdev_aio.so 00:04:49.215 CC module/bdev/nvme/bdev_mdns_client.o 00:04:49.215 CC module/bdev/nvme/vbdev_opal.o 00:04:49.215 LIB libspdk_bdev_iscsi.a 00:04:49.215 SO libspdk_bdev_iscsi.so.6.0 00:04:49.215 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:49.215 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:49.215 SYMLINK libspdk_bdev_iscsi.so 00:04:49.215 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:49.215 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:49.215 LIB libspdk_bdev_raid.a 00:04:49.215 SO libspdk_bdev_raid.so.6.0 00:04:49.215 SYMLINK libspdk_bdev_raid.so 00:04:49.475 LIB libspdk_bdev_virtio.a 00:04:49.475 SO libspdk_bdev_virtio.so.6.0 00:04:49.475 SYMLINK libspdk_bdev_virtio.so 00:04:50.470 LIB libspdk_bdev_nvme.a 00:04:50.470 SO libspdk_bdev_nvme.so.7.0 00:04:50.470 SYMLINK libspdk_bdev_nvme.so 00:04:50.730 CC module/event/subsystems/vmd/vmd.o 00:04:50.730 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:50.730 CC module/event/subsystems/fsdev/fsdev.o 00:04:50.730 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.730 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.730 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.730 CC module/event/subsystems/keyring/keyring.o 00:04:50.730 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.730 CC module/event/subsystems/sock/sock.o 00:04:50.992 LIB libspdk_event_keyring.a 00:04:50.992 LIB 
libspdk_event_scheduler.a 00:04:50.992 LIB libspdk_event_vhost_blk.a 00:04:50.992 LIB libspdk_event_vmd.a 00:04:50.992 LIB libspdk_event_fsdev.a 00:04:50.992 SO libspdk_event_keyring.so.1.0 00:04:50.992 SO libspdk_event_scheduler.so.4.0 00:04:50.992 LIB libspdk_event_sock.a 00:04:50.992 SO libspdk_event_vhost_blk.so.3.0 00:04:50.992 LIB libspdk_event_iobuf.a 00:04:50.992 SO libspdk_event_fsdev.so.1.0 00:04:50.992 SO libspdk_event_vmd.so.6.0 00:04:50.992 SO libspdk_event_sock.so.5.0 00:04:50.992 SO libspdk_event_iobuf.so.3.0 00:04:50.992 SYMLINK libspdk_event_keyring.so 00:04:50.992 SYMLINK libspdk_event_scheduler.so 00:04:50.992 SYMLINK libspdk_event_vhost_blk.so 00:04:50.992 SYMLINK libspdk_event_fsdev.so 00:04:50.992 SYMLINK libspdk_event_vmd.so 00:04:50.992 SYMLINK libspdk_event_sock.so 00:04:50.992 SYMLINK libspdk_event_iobuf.so 00:04:51.251 CC module/event/subsystems/accel/accel.o 00:04:51.251 LIB libspdk_event_accel.a 00:04:51.511 SO libspdk_event_accel.so.6.0 00:04:51.511 SYMLINK libspdk_event_accel.so 00:04:51.771 CC module/event/subsystems/bdev/bdev.o 00:04:51.771 LIB libspdk_event_bdev.a 00:04:51.771 SO libspdk_event_bdev.so.6.0 00:04:52.031 SYMLINK libspdk_event_bdev.so 00:04:52.031 CC module/event/subsystems/nbd/nbd.o 00:04:52.031 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.031 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.031 CC module/event/subsystems/scsi/scsi.o 00:04:52.031 CC module/event/subsystems/ublk/ublk.o 00:04:52.291 LIB libspdk_event_nbd.a 00:04:52.291 SO libspdk_event_nbd.so.6.0 00:04:52.291 LIB libspdk_event_ublk.a 00:04:52.291 LIB libspdk_event_scsi.a 00:04:52.291 SO libspdk_event_ublk.so.3.0 00:04:52.291 SYMLINK libspdk_event_nbd.so 00:04:52.291 SO libspdk_event_scsi.so.6.0 00:04:52.291 SYMLINK libspdk_event_ublk.so 00:04:52.291 LIB libspdk_event_nvmf.a 00:04:52.291 SYMLINK libspdk_event_scsi.so 00:04:52.291 SO libspdk_event_nvmf.so.6.0 00:04:52.291 SYMLINK libspdk_event_nvmf.so 00:04:52.551 CC module/event/subsystems/iscsi/iscsi.o 00:04:52.551 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.551 LIB libspdk_event_iscsi.a 00:04:52.551 LIB libspdk_event_vhost_scsi.a 00:04:52.551 SO libspdk_event_iscsi.so.6.0 00:04:52.551 SO libspdk_event_vhost_scsi.so.3.0 00:04:52.813 SYMLINK libspdk_event_vhost_scsi.so 00:04:52.813 SYMLINK libspdk_event_iscsi.so 00:04:52.813 SO libspdk.so.6.0 00:04:52.813 SYMLINK libspdk.so 00:04:53.072 CC test/rpc_client/rpc_client_test.o 00:04:53.072 TEST_HEADER include/spdk/accel.h 00:04:53.072 TEST_HEADER include/spdk/accel_module.h 00:04:53.072 TEST_HEADER include/spdk/assert.h 00:04:53.072 TEST_HEADER include/spdk/barrier.h 00:04:53.072 TEST_HEADER include/spdk/base64.h 00:04:53.072 TEST_HEADER include/spdk/bdev.h 00:04:53.072 TEST_HEADER include/spdk/bdev_module.h 00:04:53.072 CXX app/trace/trace.o 00:04:53.072 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.072 TEST_HEADER include/spdk/bit_array.h 00:04:53.072 TEST_HEADER include/spdk/bit_pool.h 00:04:53.072 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.072 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.072 TEST_HEADER include/spdk/blobfs.h 00:04:53.072 TEST_HEADER include/spdk/blob.h 00:04:53.072 TEST_HEADER include/spdk/conf.h 00:04:53.072 TEST_HEADER include/spdk/config.h 00:04:53.072 TEST_HEADER include/spdk/cpuset.h 00:04:53.072 TEST_HEADER include/spdk/crc16.h 00:04:53.072 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.072 TEST_HEADER include/spdk/crc32.h 00:04:53.072 TEST_HEADER include/spdk/crc64.h 00:04:53.072 TEST_HEADER include/spdk/dif.h 
00:04:53.072 TEST_HEADER include/spdk/dma.h 00:04:53.072 TEST_HEADER include/spdk/endian.h 00:04:53.072 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.072 TEST_HEADER include/spdk/env.h 00:04:53.072 TEST_HEADER include/spdk/event.h 00:04:53.072 TEST_HEADER include/spdk/fd_group.h 00:04:53.072 TEST_HEADER include/spdk/fd.h 00:04:53.072 TEST_HEADER include/spdk/file.h 00:04:53.072 TEST_HEADER include/spdk/fsdev.h 00:04:53.072 TEST_HEADER include/spdk/fsdev_module.h 00:04:53.072 TEST_HEADER include/spdk/ftl.h 00:04:53.072 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:53.072 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.072 TEST_HEADER include/spdk/hexlify.h 00:04:53.072 TEST_HEADER include/spdk/histogram_data.h 00:04:53.072 TEST_HEADER include/spdk/idxd.h 00:04:53.072 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.072 TEST_HEADER include/spdk/init.h 00:04:53.072 CC examples/util/zipf/zipf.o 00:04:53.072 TEST_HEADER include/spdk/ioat.h 00:04:53.072 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.072 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.072 TEST_HEADER include/spdk/json.h 00:04:53.072 CC test/thread/poller_perf/poller_perf.o 00:04:53.072 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.072 TEST_HEADER include/spdk/keyring.h 00:04:53.072 TEST_HEADER include/spdk/keyring_module.h 00:04:53.072 TEST_HEADER include/spdk/likely.h 00:04:53.072 CC examples/ioat/perf/perf.o 00:04:53.072 TEST_HEADER include/spdk/log.h 00:04:53.072 TEST_HEADER include/spdk/lvol.h 00:04:53.072 TEST_HEADER include/spdk/md5.h 00:04:53.072 TEST_HEADER include/spdk/memory.h 00:04:53.072 TEST_HEADER include/spdk/mmio.h 00:04:53.072 TEST_HEADER include/spdk/nbd.h 00:04:53.072 CC test/dma/test_dma/test_dma.o 00:04:53.072 TEST_HEADER include/spdk/net.h 00:04:53.072 TEST_HEADER include/spdk/notify.h 00:04:53.072 TEST_HEADER include/spdk/nvme.h 00:04:53.072 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.072 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.072 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.072 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.072 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.072 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.072 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.072 TEST_HEADER include/spdk/nvmf.h 00:04:53.072 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.072 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.072 CC test/app/bdev_svc/bdev_svc.o 00:04:53.072 TEST_HEADER include/spdk/opal.h 00:04:53.072 TEST_HEADER include/spdk/opal_spec.h 00:04:53.072 CC test/env/mem_callbacks/mem_callbacks.o 00:04:53.072 TEST_HEADER include/spdk/pci_ids.h 00:04:53.072 TEST_HEADER include/spdk/pipe.h 00:04:53.072 TEST_HEADER include/spdk/queue.h 00:04:53.072 TEST_HEADER include/spdk/reduce.h 00:04:53.072 TEST_HEADER include/spdk/rpc.h 00:04:53.072 TEST_HEADER include/spdk/scheduler.h 00:04:53.072 TEST_HEADER include/spdk/scsi.h 00:04:53.072 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.072 TEST_HEADER include/spdk/sock.h 00:04:53.072 TEST_HEADER include/spdk/stdinc.h 00:04:53.072 TEST_HEADER include/spdk/string.h 00:04:53.072 TEST_HEADER include/spdk/thread.h 00:04:53.072 TEST_HEADER include/spdk/trace.h 00:04:53.072 TEST_HEADER include/spdk/trace_parser.h 00:04:53.072 TEST_HEADER include/spdk/tree.h 00:04:53.072 TEST_HEADER include/spdk/ublk.h 00:04:53.072 TEST_HEADER include/spdk/util.h 00:04:53.333 TEST_HEADER include/spdk/uuid.h 00:04:53.333 TEST_HEADER include/spdk/version.h 00:04:53.333 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.333 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.333 
TEST_HEADER include/spdk/vhost.h 00:04:53.333 TEST_HEADER include/spdk/vmd.h 00:04:53.333 TEST_HEADER include/spdk/xor.h 00:04:53.333 TEST_HEADER include/spdk/zipf.h 00:04:53.333 CXX test/cpp_headers/accel.o 00:04:53.333 LINK rpc_client_test 00:04:53.333 LINK interrupt_tgt 00:04:53.333 LINK zipf 00:04:53.333 LINK poller_perf 00:04:53.333 LINK bdev_svc 00:04:53.333 LINK mem_callbacks 00:04:53.333 LINK ioat_perf 00:04:53.333 CXX test/cpp_headers/accel_module.o 00:04:53.333 LINK spdk_trace 00:04:53.333 CC app/trace_record/trace_record.o 00:04:53.594 CC app/iscsi_tgt/iscsi_tgt.o 00:04:53.594 CXX test/cpp_headers/assert.o 00:04:53.594 CC app/nvmf_tgt/nvmf_main.o 00:04:53.594 CC test/env/vtophys/vtophys.o 00:04:53.594 CC app/spdk_tgt/spdk_tgt.o 00:04:53.594 CC examples/ioat/verify/verify.o 00:04:53.594 LINK test_dma 00:04:53.594 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:53.594 CXX test/cpp_headers/barrier.o 00:04:53.594 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:53.594 LINK vtophys 00:04:53.594 LINK nvmf_tgt 00:04:53.594 LINK spdk_trace_record 00:04:53.855 LINK iscsi_tgt 00:04:53.855 LINK spdk_tgt 00:04:53.855 LINK verify 00:04:53.855 CXX test/cpp_headers/base64.o 00:04:53.855 CXX test/cpp_headers/bdev.o 00:04:53.855 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:53.855 CXX test/cpp_headers/bdev_module.o 00:04:53.855 CC test/env/memory/memory_ut.o 00:04:53.855 CC test/env/pci/pci_ut.o 00:04:53.855 CC app/spdk_lspci/spdk_lspci.o 00:04:54.115 LINK nvme_fuzz 00:04:54.115 CXX test/cpp_headers/bdev_zone.o 00:04:54.115 LINK env_dpdk_post_init 00:04:54.115 LINK spdk_lspci 00:04:54.115 CC test/event/event_perf/event_perf.o 00:04:54.115 CC examples/thread/thread/thread_ex.o 00:04:54.115 CC test/event/reactor/reactor.o 00:04:54.375 CXX test/cpp_headers/bit_array.o 00:04:54.375 LINK event_perf 00:04:54.375 CC test/event/reactor_perf/reactor_perf.o 00:04:54.375 LINK reactor 00:04:54.375 CC app/spdk_nvme_perf/perf.o 00:04:54.375 CC test/event/app_repeat/app_repeat.o 00:04:54.375 LINK pci_ut 00:04:54.375 LINK thread 00:04:54.375 CXX test/cpp_headers/bit_pool.o 00:04:54.375 CXX test/cpp_headers/blob_bdev.o 00:04:54.375 LINK reactor_perf 00:04:54.635 LINK app_repeat 00:04:54.635 CC test/nvme/aer/aer.o 00:04:54.635 CXX test/cpp_headers/blobfs_bdev.o 00:04:54.635 CXX test/cpp_headers/blobfs.o 00:04:54.635 CC test/event/scheduler/scheduler.o 00:04:54.894 LINK memory_ut 00:04:54.894 CC test/accel/dif/dif.o 00:04:54.894 CC examples/vmd/lsvmd/lsvmd.o 00:04:54.894 CC examples/sock/hello_world/hello_sock.o 00:04:54.894 CXX test/cpp_headers/blob.o 00:04:54.894 LINK aer 00:04:54.894 CC examples/vmd/led/led.o 00:04:54.894 LINK lsvmd 00:04:54.894 CXX test/cpp_headers/conf.o 00:04:54.894 LINK scheduler 00:04:55.154 CC test/nvme/reset/reset.o 00:04:55.154 LINK led 00:04:55.154 LINK hello_sock 00:04:55.154 CC test/nvme/sgl/sgl.o 00:04:55.154 CXX test/cpp_headers/config.o 00:04:55.154 CC test/nvme/e2edp/nvme_dp.o 00:04:55.154 CXX test/cpp_headers/cpuset.o 00:04:55.154 LINK spdk_nvme_perf 00:04:55.154 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:55.416 LINK reset 00:04:55.416 CC test/nvme/overhead/overhead.o 00:04:55.416 CXX test/cpp_headers/crc16.o 00:04:55.416 CC examples/idxd/perf/perf.o 00:04:55.416 LINK sgl 00:04:55.416 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:55.416 LINK nvme_dp 00:04:55.416 CC app/spdk_nvme_identify/identify.o 00:04:55.416 CXX test/cpp_headers/crc32.o 00:04:55.676 CC test/nvme/err_injection/err_injection.o 00:04:55.676 CXX test/cpp_headers/crc64.o 00:04:55.676 LINK dif 
00:04:55.676 LINK iscsi_fuzz 00:04:55.676 CXX test/cpp_headers/dif.o 00:04:55.676 LINK overhead 00:04:55.676 LINK err_injection 00:04:55.676 CC test/nvme/startup/startup.o 00:04:55.676 LINK idxd_perf 00:04:55.676 CC test/nvme/reserve/reserve.o 00:04:55.937 CXX test/cpp_headers/dma.o 00:04:55.937 CC test/nvme/simple_copy/simple_copy.o 00:04:55.937 LINK vhost_fuzz 00:04:55.937 CC test/nvme/connect_stress/connect_stress.o 00:04:55.937 LINK startup 00:04:55.937 CC test/app/histogram_perf/histogram_perf.o 00:04:55.937 CXX test/cpp_headers/endian.o 00:04:55.937 LINK reserve 00:04:55.937 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:56.197 CC examples/accel/perf/accel_perf.o 00:04:56.197 LINK connect_stress 00:04:56.197 LINK simple_copy 00:04:56.197 LINK histogram_perf 00:04:56.197 CXX test/cpp_headers/env_dpdk.o 00:04:56.197 CC examples/blob/hello_world/hello_blob.o 00:04:56.197 CC examples/nvme/hello_world/hello_world.o 00:04:56.197 CC examples/nvme/reconnect/reconnect.o 00:04:56.197 CXX test/cpp_headers/env.o 00:04:56.457 LINK hello_fsdev 00:04:56.457 CC test/app/jsoncat/jsoncat.o 00:04:56.457 CC test/nvme/boot_partition/boot_partition.o 00:04:56.457 LINK spdk_nvme_identify 00:04:56.457 CC test/blobfs/mkfs/mkfs.o 00:04:56.457 LINK hello_blob 00:04:56.457 CXX test/cpp_headers/event.o 00:04:56.457 LINK hello_world 00:04:56.457 LINK jsoncat 00:04:56.457 LINK boot_partition 00:04:56.718 CXX test/cpp_headers/fd_group.o 00:04:56.718 LINK reconnect 00:04:56.718 CC test/app/stub/stub.o 00:04:56.718 LINK mkfs 00:04:56.718 CC app/spdk_nvme_discover/discovery_aer.o 00:04:56.718 LINK accel_perf 00:04:56.718 CC test/nvme/compliance/nvme_compliance.o 00:04:56.718 CC examples/blob/cli/blobcli.o 00:04:56.718 CC test/nvme/fused_ordering/fused_ordering.o 00:04:56.718 CXX test/cpp_headers/fd.o 00:04:56.718 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.718 CXX test/cpp_headers/file.o 00:04:56.718 LINK stub 00:04:56.718 CXX test/cpp_headers/fsdev.o 00:04:56.718 LINK spdk_nvme_discover 00:04:56.718 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:56.979 LINK fused_ordering 00:04:56.979 CXX test/cpp_headers/fsdev_module.o 00:04:56.979 CXX test/cpp_headers/ftl.o 00:04:56.979 CC test/nvme/fdp/fdp.o 00:04:56.979 LINK doorbell_aers 00:04:56.979 CC test/nvme/cuse/cuse.o 00:04:56.979 CC app/spdk_top/spdk_top.o 00:04:56.979 LINK nvme_compliance 00:04:56.979 CXX test/cpp_headers/fuse_dispatcher.o 00:04:57.240 LINK blobcli 00:04:57.240 CC app/spdk_dd/spdk_dd.o 00:04:57.240 CXX test/cpp_headers/gpt_spec.o 00:04:57.240 CC app/vhost/vhost.o 00:04:57.240 CC examples/nvme/arbitration/arbitration.o 00:04:57.240 CC app/fio/nvme/fio_plugin.o 00:04:57.240 LINK fdp 00:04:57.240 CXX test/cpp_headers/hexlify.o 00:04:57.240 LINK nvme_manage 00:04:57.499 LINK vhost 00:04:57.499 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.499 CXX test/cpp_headers/histogram_data.o 00:04:57.499 CC app/fio/bdev/fio_plugin.o 00:04:57.499 LINK arbitration 00:04:57.499 LINK spdk_dd 00:04:57.499 CC examples/nvme/hotplug/hotplug.o 00:04:57.499 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:57.499 CXX test/cpp_headers/idxd.o 00:04:57.757 LINK hello_bdev 00:04:57.757 CC examples/nvme/abort/abort.o 00:04:57.757 CXX test/cpp_headers/idxd_spec.o 00:04:57.757 LINK cmb_copy 00:04:57.757 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:57.757 LINK hotplug 00:04:57.757 LINK spdk_nvme 00:04:57.757 CXX test/cpp_headers/init.o 00:04:58.016 CXX test/cpp_headers/ioat.o 00:04:58.016 LINK spdk_top 00:04:58.016 LINK pmr_persistence 00:04:58.016 CC 
examples/bdev/bdevperf/bdevperf.o 00:04:58.016 LINK spdk_bdev 00:04:58.016 CXX test/cpp_headers/ioat_spec.o 00:04:58.016 LINK cuse 00:04:58.016 CXX test/cpp_headers/iscsi_spec.o 00:04:58.016 CXX test/cpp_headers/json.o 00:04:58.016 CXX test/cpp_headers/jsonrpc.o 00:04:58.016 CC test/bdev/bdevio/bdevio.o 00:04:58.016 LINK abort 00:04:58.016 CC test/lvol/esnap/esnap.o 00:04:58.016 CXX test/cpp_headers/keyring.o 00:04:58.274 CXX test/cpp_headers/keyring_module.o 00:04:58.274 CXX test/cpp_headers/likely.o 00:04:58.274 CXX test/cpp_headers/log.o 00:04:58.274 CXX test/cpp_headers/lvol.o 00:04:58.274 CXX test/cpp_headers/md5.o 00:04:58.274 CXX test/cpp_headers/memory.o 00:04:58.274 CXX test/cpp_headers/mmio.o 00:04:58.274 CXX test/cpp_headers/nbd.o 00:04:58.274 CXX test/cpp_headers/net.o 00:04:58.274 CXX test/cpp_headers/notify.o 00:04:58.274 CXX test/cpp_headers/nvme.o 00:04:58.274 CXX test/cpp_headers/nvme_intel.o 00:04:58.274 CXX test/cpp_headers/nvme_ocssd.o 00:04:58.274 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:58.274 CXX test/cpp_headers/nvme_spec.o 00:04:58.532 LINK bdevio 00:04:58.532 CXX test/cpp_headers/nvme_zns.o 00:04:58.532 CXX test/cpp_headers/nvmf_cmd.o 00:04:58.532 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:58.532 CXX test/cpp_headers/nvmf.o 00:04:58.532 CXX test/cpp_headers/nvmf_spec.o 00:04:58.532 CXX test/cpp_headers/nvmf_transport.o 00:04:58.532 CXX test/cpp_headers/opal.o 00:04:58.532 CXX test/cpp_headers/opal_spec.o 00:04:58.532 CXX test/cpp_headers/pci_ids.o 00:04:58.532 CXX test/cpp_headers/pipe.o 00:04:58.532 CXX test/cpp_headers/queue.o 00:04:58.532 CXX test/cpp_headers/reduce.o 00:04:58.532 CXX test/cpp_headers/rpc.o 00:04:58.790 CXX test/cpp_headers/scheduler.o 00:04:58.790 CXX test/cpp_headers/scsi.o 00:04:58.790 CXX test/cpp_headers/scsi_spec.o 00:04:58.790 CXX test/cpp_headers/sock.o 00:04:58.790 LINK bdevperf 00:04:58.790 CXX test/cpp_headers/stdinc.o 00:04:58.790 CXX test/cpp_headers/string.o 00:04:58.790 CXX test/cpp_headers/thread.o 00:04:58.790 CXX test/cpp_headers/trace.o 00:04:58.790 CXX test/cpp_headers/trace_parser.o 00:04:58.790 CXX test/cpp_headers/tree.o 00:04:58.790 CXX test/cpp_headers/ublk.o 00:04:58.790 CXX test/cpp_headers/util.o 00:04:58.790 CXX test/cpp_headers/uuid.o 00:04:58.790 CXX test/cpp_headers/version.o 00:04:58.790 CXX test/cpp_headers/vfio_user_pci.o 00:04:58.790 CXX test/cpp_headers/vfio_user_spec.o 00:04:58.790 CXX test/cpp_headers/vhost.o 00:04:58.790 CXX test/cpp_headers/vmd.o 00:04:59.050 CXX test/cpp_headers/xor.o 00:04:59.050 CXX test/cpp_headers/zipf.o 00:04:59.050 CC examples/nvmf/nvmf/nvmf.o 00:04:59.312 LINK nvmf 00:05:02.619 LINK esnap 00:05:02.881 00:05:02.881 real 1m1.770s 00:05:02.881 user 5m2.485s 00:05:02.881 sys 0m52.354s 00:05:02.881 09:59:31 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:02.881 09:59:31 make -- common/autotest_common.sh@10 -- $ set +x 00:05:02.881 ************************************ 00:05:02.881 END TEST make 00:05:02.881 ************************************ 00:05:02.881 09:59:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:02.881 09:59:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:02.881 09:59:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:02.881 09:59:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.881 09:59:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:02.881 09:59:31 -- pm/common@44 -- $ pid=5798 00:05:02.881 09:59:31 -- pm/common@50 -- $ kill -TERM 
5798 00:05:02.881 09:59:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.881 09:59:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:02.881 09:59:31 -- pm/common@44 -- $ pid=5799 00:05:02.881 09:59:31 -- pm/common@50 -- $ kill -TERM 5799 00:05:03.141 09:59:31 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.141 09:59:31 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.141 09:59:31 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.141 09:59:31 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.141 09:59:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.141 09:59:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.141 09:59:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.141 09:59:31 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.141 09:59:31 -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.141 09:59:31 -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.141 09:59:31 -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.141 09:59:31 -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.141 09:59:31 -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.141 09:59:31 -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.141 09:59:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.141 09:59:31 -- scripts/common.sh@344 -- # case "$op" in 00:05:03.141 09:59:31 -- scripts/common.sh@345 -- # : 1 00:05:03.141 09:59:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.141 09:59:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.141 09:59:31 -- scripts/common.sh@365 -- # decimal 1 00:05:03.141 09:59:31 -- scripts/common.sh@353 -- # local d=1 00:05:03.141 09:59:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.141 09:59:31 -- scripts/common.sh@355 -- # echo 1 00:05:03.141 09:59:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.141 09:59:31 -- scripts/common.sh@366 -- # decimal 2 00:05:03.141 09:59:31 -- scripts/common.sh@353 -- # local d=2 00:05:03.141 09:59:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.141 09:59:31 -- scripts/common.sh@355 -- # echo 2 00:05:03.141 09:59:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.141 09:59:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.141 09:59:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.141 09:59:31 -- scripts/common.sh@368 -- # return 0 00:05:03.141 09:59:31 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.141 09:59:31 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.141 --rc genhtml_branch_coverage=1 00:05:03.141 --rc genhtml_function_coverage=1 00:05:03.141 --rc genhtml_legend=1 00:05:03.141 --rc geninfo_all_blocks=1 00:05:03.141 --rc geninfo_unexecuted_blocks=1 00:05:03.141 00:05:03.141 ' 00:05:03.141 09:59:31 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.141 --rc genhtml_branch_coverage=1 00:05:03.141 --rc genhtml_function_coverage=1 00:05:03.141 --rc genhtml_legend=1 00:05:03.141 --rc geninfo_all_blocks=1 00:05:03.141 --rc geninfo_unexecuted_blocks=1 00:05:03.141 00:05:03.141 ' 00:05:03.141 09:59:31 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.142 --rc 
genhtml_branch_coverage=1 00:05:03.142 --rc genhtml_function_coverage=1 00:05:03.142 --rc genhtml_legend=1 00:05:03.142 --rc geninfo_all_blocks=1 00:05:03.142 --rc geninfo_unexecuted_blocks=1 00:05:03.142 00:05:03.142 ' 00:05:03.142 09:59:31 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.142 --rc genhtml_branch_coverage=1 00:05:03.142 --rc genhtml_function_coverage=1 00:05:03.142 --rc genhtml_legend=1 00:05:03.142 --rc geninfo_all_blocks=1 00:05:03.142 --rc geninfo_unexecuted_blocks=1 00:05:03.142 00:05:03.142 ' 00:05:03.142 09:59:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.142 09:59:31 -- nvmf/common.sh@7 -- # uname -s 00:05:03.142 09:59:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.142 09:59:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.142 09:59:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.142 09:59:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.142 09:59:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.142 09:59:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.142 09:59:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.142 09:59:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.142 09:59:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.142 09:59:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.142 09:59:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:314d78e8-5733-4db7-8146-5db03a3e62c6 00:05:03.142 09:59:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=314d78e8-5733-4db7-8146-5db03a3e62c6 00:05:03.142 09:59:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.142 09:59:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.142 09:59:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.142 09:59:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.142 09:59:31 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.142 09:59:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.142 09:59:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.142 09:59:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.142 09:59:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.142 09:59:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.142 09:59:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.142 09:59:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.142 09:59:31 -- paths/export.sh@5 -- # export PATH 00:05:03.142 09:59:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.142 09:59:31 -- nvmf/common.sh@51 -- # : 0 00:05:03.142 09:59:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.142 09:59:31 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.142 09:59:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.142 09:59:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.142 09:59:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.142 09:59:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.142 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.142 09:59:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.142 09:59:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.142 09:59:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.142 09:59:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:03.142 09:59:31 -- spdk/autotest.sh@32 -- # uname -s 00:05:03.142 09:59:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:03.142 09:59:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:03.142 09:59:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:03.142 09:59:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:03.142 09:59:31 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:03.142 09:59:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:03.142 09:59:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:03.142 09:59:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:03.142 09:59:31 -- spdk/autotest.sh@48 -- # udevadm_pid=66607 00:05:03.142 09:59:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:03.142 09:59:31 -- pm/common@17 -- # local monitor 00:05:03.142 09:59:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.142 09:59:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:03.142 09:59:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.142 09:59:31 -- pm/common@25 -- # sleep 1 00:05:03.142 09:59:31 -- pm/common@21 -- # date +%s 00:05:03.142 09:59:31 -- pm/common@21 -- # date +%s 00:05:03.142 09:59:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730627971 00:05:03.142 09:59:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730627971 00:05:03.142 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730627971_collect-cpu-load.pm.log 00:05:03.142 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730627971_collect-vmstat.pm.log 00:05:04.087 09:59:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:04.087 09:59:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:04.087 09:59:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.087 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.348 09:59:32 -- spdk/autotest.sh@59 -- # create_test_list 
00:05:04.348 09:59:32 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:04.348 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.348 09:59:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:04.348 09:59:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:04.348 09:59:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:04.348 09:59:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:04.348 09:59:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:04.348 09:59:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:04.348 09:59:32 -- common/autotest_common.sh@1455 -- # uname 00:05:04.348 09:59:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:04.348 09:59:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:04.348 09:59:32 -- common/autotest_common.sh@1475 -- # uname 00:05:04.348 09:59:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:04.348 09:59:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:04.348 09:59:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:04.348 lcov: LCOV version 1.15 00:05:04.348 09:59:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:19.262 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:19.262 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:34.172 10:00:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:34.172 10:00:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.172 10:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:34.172 10:00:02 -- spdk/autotest.sh@78 -- # rm -f 00:05:34.172 10:00:02 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.995 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:34.995 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:34.995 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:34.995 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:34.995 10:00:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:34.995 10:00:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:34.995 10:00:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:34.995 10:00:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:34.995 10:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:34.995 10:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:34.995 10:00:03 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:34.995 10:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:34.995 10:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2c2n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme2c2n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:34.995 10:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:34.995 10:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:34.995 10:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:34.995 10:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:05:34.995 10:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:05:34.995 10:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:34.995 10:00:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:34.995 10:00:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:05:34.995 10:00:03 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:05:34.996 10:00:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:05:34.996 10:00:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:34.996 10:00:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:34.996 10:00:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:34.996 10:00:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:34.996 10:00:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:34.996 10:00:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:34.996 10:00:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:34.996 No valid GPT data, bailing 00:05:34.996 10:00:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:34.996 10:00:03 -- scripts/common.sh@394 -- # pt= 00:05:34.996 10:00:03 -- scripts/common.sh@395 -- # return 1 00:05:34.996 10:00:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:34.996 1+0 records in 00:05:34.996 1+0 records out 00:05:34.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308242 s, 34.0 MB/s 
00:05:34.996 10:00:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:34.996 10:00:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:34.996 10:00:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:34.996 10:00:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:34.996 10:00:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:34.996 No valid GPT data, bailing 00:05:34.996 10:00:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:34.996 10:00:03 -- scripts/common.sh@394 -- # pt= 00:05:34.996 10:00:03 -- scripts/common.sh@395 -- # return 1 00:05:34.996 10:00:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:34.996 1+0 records in 00:05:34.996 1+0 records out 00:05:34.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565302 s, 185 MB/s 00:05:34.996 10:00:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:34.996 10:00:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:34.996 10:00:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:34.996 10:00:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:34.996 10:00:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:35.256 No valid GPT data, bailing 00:05:35.256 10:00:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:35.256 10:00:03 -- scripts/common.sh@394 -- # pt= 00:05:35.256 10:00:03 -- scripts/common.sh@395 -- # return 1 00:05:35.256 10:00:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:35.256 1+0 records in 00:05:35.256 1+0 records out 00:05:35.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529677 s, 198 MB/s 00:05:35.256 10:00:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.256 10:00:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.256 10:00:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:35.256 10:00:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:35.256 10:00:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:35.256 No valid GPT data, bailing 00:05:35.256 10:00:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:35.256 10:00:03 -- scripts/common.sh@394 -- # pt= 00:05:35.256 10:00:03 -- scripts/common.sh@395 -- # return 1 00:05:35.256 10:00:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:35.256 1+0 records in 00:05:35.256 1+0 records out 00:05:35.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571667 s, 183 MB/s 00:05:35.256 10:00:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.256 10:00:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.256 10:00:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:05:35.256 10:00:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:05:35.256 10:00:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:05:35.256 No valid GPT data, bailing 00:05:35.256 10:00:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:05:35.256 10:00:03 -- scripts/common.sh@394 -- # pt= 00:05:35.256 10:00:03 -- scripts/common.sh@395 -- # return 1 00:05:35.256 10:00:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:05:35.256 1+0 records in 00:05:35.256 1+0 records out 00:05:35.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562092 s, 187 
MB/s 00:05:35.256 10:00:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.256 10:00:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.256 10:00:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:05:35.256 10:00:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:05:35.256 10:00:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:05:35.256 No valid GPT data, bailing 00:05:35.256 10:00:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:05:35.515 10:00:03 -- scripts/common.sh@394 -- # pt= 00:05:35.515 10:00:03 -- scripts/common.sh@395 -- # return 1 00:05:35.515 10:00:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:05:35.515 1+0 records in 00:05:35.515 1+0 records out 00:05:35.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543212 s, 193 MB/s 00:05:35.515 10:00:03 -- spdk/autotest.sh@105 -- # sync 00:05:35.515 10:00:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:35.515 10:00:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:35.515 10:00:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:36.919 10:00:05 -- spdk/autotest.sh@111 -- # uname -s 00:05:36.919 10:00:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:36.919 10:00:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:36.919 10:00:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:37.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.743 Hugepages 00:05:37.743 node hugesize free / total 00:05:37.743 node0 1048576kB 0 / 0 00:05:37.743 node0 2048kB 0 / 0 00:05:37.743 00:05:37.743 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:37.743 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:38.001 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:38.001 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:38.001 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:05:38.001 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:38.001 10:00:06 -- spdk/autotest.sh@117 -- # uname -s 00:05:38.001 10:00:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:38.001 10:00:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:38.001 10:00:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.135 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.135 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.135 10:00:07 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:40.511 10:00:08 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:40.511 10:00:08 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:40.511 10:00:08 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.511 10:00:08 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:40.511 10:00:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:40.511 10:00:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:40.511 10:00:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:05:40.511 10:00:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:40.511 10:00:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:40.511 10:00:08 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:40.511 10:00:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:40.511 10:00:08 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.770 Waiting for block devices as requested 00:05:40.770 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.770 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.029 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.029 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.298 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:46.298 10:00:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.298 10:00:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:46.298 10:00:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.298 10:00:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1541 -- # continue 00:05:46.298 10:00:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:46.298 10:00:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.298 10:00:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1541 -- # continue 00:05:46.298 10:00:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.298 10:00:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1541 -- # continue 00:05:46.298 10:00:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:46.298 10:00:14 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.298 10:00:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:46.298 10:00:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.298 10:00:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.298 10:00:14 -- common/autotest_common.sh@1541 -- # continue 00:05:46.298 10:00:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:46.298 10:00:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.298 10:00:14 -- common/autotest_common.sh@10 -- # set +x 00:05:46.298 10:00:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:46.298 10:00:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.298 10:00:14 -- common/autotest_common.sh@10 -- # set +x 00:05:46.298 10:00:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.128 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.128 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.389 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.389 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.389 10:00:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:47.389 10:00:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.389 10:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:47.389 10:00:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:47.389 10:00:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:47.389 10:00:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:47.389 10:00:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:47.389 10:00:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:47.389 10:00:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:47.389 10:00:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:47.389 10:00:15 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:47.389 10:00:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:47.389 
10:00:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:47.389 10:00:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.389 10:00:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.389 10:00:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:47.389 10:00:15 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:47.389 10:00:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:47.389 10:00:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.389 10:00:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.389 10:00:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.389 10:00:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.389 10:00:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.389 10:00:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.389 10:00:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:47.389 10:00:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.389 10:00:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.389 10:00:15 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:47.389 10:00:15 -- common/autotest_common.sh@1570 -- # return 0 00:05:47.389 10:00:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:47.389 10:00:15 -- common/autotest_common.sh@1578 -- # return 0 00:05:47.389 10:00:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:47.389 10:00:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:47.389 10:00:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.389 10:00:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.389 10:00:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:47.389 10:00:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.389 10:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:47.389 10:00:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:47.389 10:00:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.389 10:00:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.389 10:00:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.389 10:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:47.389 ************************************ 00:05:47.389 START TEST env 00:05:47.389 ************************************ 00:05:47.389 10:00:15 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.651 * Looking for test storage... 
00:05:47.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.651 10:00:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.651 10:00:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.651 10:00:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.651 10:00:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.651 10:00:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.651 10:00:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.651 10:00:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.651 10:00:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.651 10:00:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.651 10:00:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.651 10:00:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.651 10:00:15 env -- scripts/common.sh@344 -- # case "$op" in 00:05:47.651 10:00:15 env -- scripts/common.sh@345 -- # : 1 00:05:47.651 10:00:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.651 10:00:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.651 10:00:15 env -- scripts/common.sh@365 -- # decimal 1 00:05:47.651 10:00:15 env -- scripts/common.sh@353 -- # local d=1 00:05:47.651 10:00:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.651 10:00:15 env -- scripts/common.sh@355 -- # echo 1 00:05:47.651 10:00:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.651 10:00:15 env -- scripts/common.sh@366 -- # decimal 2 00:05:47.651 10:00:15 env -- scripts/common.sh@353 -- # local d=2 00:05:47.651 10:00:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.651 10:00:15 env -- scripts/common.sh@355 -- # echo 2 00:05:47.651 10:00:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.651 10:00:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.651 10:00:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.651 10:00:15 env -- scripts/common.sh@368 -- # return 0 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.651 --rc genhtml_branch_coverage=1 00:05:47.651 --rc genhtml_function_coverage=1 00:05:47.651 --rc genhtml_legend=1 00:05:47.651 --rc geninfo_all_blocks=1 00:05:47.651 --rc geninfo_unexecuted_blocks=1 00:05:47.651 00:05:47.651 ' 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.651 --rc genhtml_branch_coverage=1 00:05:47.651 --rc genhtml_function_coverage=1 00:05:47.651 --rc genhtml_legend=1 00:05:47.651 --rc geninfo_all_blocks=1 00:05:47.651 --rc geninfo_unexecuted_blocks=1 00:05:47.651 00:05:47.651 ' 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.651 --rc genhtml_branch_coverage=1 00:05:47.651 --rc genhtml_function_coverage=1 00:05:47.651 --rc 
genhtml_legend=1 00:05:47.651 --rc geninfo_all_blocks=1 00:05:47.651 --rc geninfo_unexecuted_blocks=1 00:05:47.651 00:05:47.651 ' 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.651 --rc genhtml_branch_coverage=1 00:05:47.651 --rc genhtml_function_coverage=1 00:05:47.651 --rc genhtml_legend=1 00:05:47.651 --rc geninfo_all_blocks=1 00:05:47.651 --rc geninfo_unexecuted_blocks=1 00:05:47.651 00:05:47.651 ' 00:05:47.651 10:00:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.651 10:00:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.651 10:00:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.651 ************************************ 00:05:47.651 START TEST env_memory 00:05:47.651 ************************************ 00:05:47.651 10:00:15 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.651 00:05:47.651 00:05:47.651 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.651 http://cunit.sourceforge.net/ 00:05:47.651 00:05:47.651 00:05:47.651 Suite: memory 00:05:47.651 Test: alloc and free memory map ...[2024-11-03 10:00:15.971118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.651 passed 00:05:47.651 Test: mem map translation ...[2024-11-03 10:00:16.009737] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:47.651 [2024-11-03 10:00:16.009778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:47.651 [2024-11-03 10:00:16.009832] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:47.651 [2024-11-03 10:00:16.009846] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:47.912 passed 00:05:47.912 Test: mem map registration ...[2024-11-03 10:00:16.077941] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:47.912 [2024-11-03 10:00:16.077991] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:47.912 passed 00:05:47.912 Test: mem map adjacent registrations ...passed 00:05:47.912 00:05:47.912 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.912 suites 1 1 n/a 0 0 00:05:47.913 tests 4 4 4 0 0 00:05:47.913 asserts 152 152 152 0 n/a 00:05:47.913 00:05:47.913 Elapsed time = 0.233 seconds 00:05:47.913 00:05:47.913 real 0m0.268s 00:05:47.913 user 0m0.238s 00:05:47.913 sys 0m0.022s 00:05:47.913 10:00:16 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.913 10:00:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:47.913 ************************************ 00:05:47.913 END TEST env_memory 00:05:47.913 ************************************ 00:05:47.913 10:00:16 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.913 10:00:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.913 10:00:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.913 10:00:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.913 ************************************ 00:05:47.913 START TEST env_vtophys 00:05:47.913 ************************************ 00:05:47.913 10:00:16 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:48.172 EAL: lib.eal log level changed from notice to debug 00:05:48.172 EAL: Detected lcore 0 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 1 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 2 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 3 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 4 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 5 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 6 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 7 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 8 as core 0 on socket 0 00:05:48.172 EAL: Detected lcore 9 as core 0 on socket 0 00:05:48.172 EAL: Maximum logical cores by configuration: 128 00:05:48.172 EAL: Detected CPU lcores: 10 00:05:48.172 EAL: Detected NUMA nodes: 1 00:05:48.172 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:48.172 EAL: Detected shared linkage of DPDK 00:05:48.172 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:48.172 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:48.172 EAL: Registered [vdev] bus. 00:05:48.172 EAL: bus.vdev log level changed from disabled to notice 00:05:48.172 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:48.172 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:48.172 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:48.172 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:48.172 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:48.172 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:48.173 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:48.173 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:48.173 EAL: No shared files mode enabled, IPC will be disabled 00:05:48.173 EAL: No shared files mode enabled, IPC is disabled 00:05:48.173 EAL: Selected IOVA mode 'PA' 00:05:48.173 EAL: Probing VFIO support... 00:05:48.173 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.173 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:48.173 EAL: Ask a virtual area of 0x2e000 bytes 00:05:48.173 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:48.173 EAL: Setting up physically contiguous memory... 
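
Note the VFIO probe above: /sys/module/vfio is absent inside the guest, so EAL skips VFIO support and settles on IOVA mode 'PA', which is what the uio_pci_generic bindings from setup.sh require. A rough pre-flight check mirroring that decision (illustrative only; the authoritative logic lives inside DPDK's EAL):

    # Will EAL have VFIO available, or fall back to PA mode as in this log?
    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
        echo "vfio loaded: EAL may select IOVA mode 'VA' behind an IOMMU"
    else
        echo "vfio missing: EAL skips VFIO and selects IOVA mode 'PA'"
    fi
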
00:05:48.173 EAL: Setting maximum number of open files to 524288 00:05:48.173 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:48.173 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:48.173 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.173 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:48.173 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.173 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.173 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:48.173 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:48.173 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.173 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:48.173 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.173 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.173 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:48.173 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:48.173 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.173 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:48.173 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.173 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.173 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:48.173 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:48.173 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.173 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:48.173 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.173 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.173 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:48.173 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:48.173 EAL: Hugepages will be freed exactly as allocated. 00:05:48.173 EAL: No shared files mode enabled, IPC is disabled 00:05:48.173 EAL: No shared files mode enabled, IPC is disabled 00:05:48.173 EAL: TSC frequency is ~2600000 KHz 00:05:48.173 EAL: Main lcore 0 is ready (tid=7f88f7856a40;cpuset=[0]) 00:05:48.173 EAL: Trying to obtain current memory policy. 00:05:48.173 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.173 EAL: Restoring previous memory policy: 0 00:05:48.173 EAL: request: mp_malloc_sync 00:05:48.173 EAL: No shared files mode enabled, IPC is disabled 00:05:48.173 EAL: Heap on socket 0 was expanded by 2MB 00:05:48.173 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.173 EAL: No shared files mode enabled, IPC is disabled 00:05:48.173 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:48.173 EAL: Mem event callback 'spdk:(nil)' registered 00:05:48.173 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:48.173 00:05:48.173 00:05:48.173 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.173 http://cunit.sourceforge.net/ 00:05:48.173 00:05:48.173 00:05:48.173 Suite: components_suite 00:05:48.432 Test: vtophys_malloc_test ...passed 00:05:48.432 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
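
The rounds that follow are vtophys_spdk_malloc_test doubling its DMA buffer from 2 MB up to 1 GB; every allocation fires the registered 'spdk:' mem event callback, producing the matched 'expanded by'/'shrunk by' pairs below, and each expansion figure appears to be the buffer size plus a 2 MB remainder (hence the series 4, 6, 10, 18, ..., 1026 MB). One way to watch the same hugepage give-and-take from outside the test (illustrative; assumes default 2 MB hugepages):

    # Re-run the suite from this job's checkout while watching hugepage counters.
    sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys &
    watch -n 0.2 'grep -E "HugePages_(Total|Free|Rsvd)" /proc/meminfo'
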
00:05:48.432 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.433 EAL: Restoring previous memory policy: 4 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was expanded by 4MB 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was shrunk by 4MB 00:05:48.433 EAL: Trying to obtain current memory policy. 00:05:48.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.433 EAL: Restoring previous memory policy: 4 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was expanded by 6MB 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was shrunk by 6MB 00:05:48.433 EAL: Trying to obtain current memory policy. 00:05:48.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.433 EAL: Restoring previous memory policy: 4 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was expanded by 10MB 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was shrunk by 10MB 00:05:48.433 EAL: Trying to obtain current memory policy. 00:05:48.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.433 EAL: Restoring previous memory policy: 4 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was expanded by 18MB 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was shrunk by 18MB 00:05:48.433 EAL: Trying to obtain current memory policy. 00:05:48.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.433 EAL: Restoring previous memory policy: 4 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was expanded by 34MB 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was shrunk by 34MB 00:05:48.433 EAL: Trying to obtain current memory policy. 
00:05:48.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.433 EAL: Restoring previous memory policy: 4 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.433 EAL: request: mp_malloc_sync 00:05:48.433 EAL: No shared files mode enabled, IPC is disabled 00:05:48.433 EAL: Heap on socket 0 was expanded by 66MB 00:05:48.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.691 EAL: request: mp_malloc_sync 00:05:48.691 EAL: No shared files mode enabled, IPC is disabled 00:05:48.691 EAL: Heap on socket 0 was shrunk by 66MB 00:05:48.691 EAL: Trying to obtain current memory policy. 00:05:48.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.691 EAL: Restoring previous memory policy: 4 00:05:48.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.691 EAL: request: mp_malloc_sync 00:05:48.691 EAL: No shared files mode enabled, IPC is disabled 00:05:48.691 EAL: Heap on socket 0 was expanded by 130MB 00:05:48.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.691 EAL: request: mp_malloc_sync 00:05:48.691 EAL: No shared files mode enabled, IPC is disabled 00:05:48.691 EAL: Heap on socket 0 was shrunk by 130MB 00:05:48.691 EAL: Trying to obtain current memory policy. 00:05:48.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.691 EAL: Restoring previous memory policy: 4 00:05:48.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.691 EAL: request: mp_malloc_sync 00:05:48.691 EAL: No shared files mode enabled, IPC is disabled 00:05:48.691 EAL: Heap on socket 0 was expanded by 258MB 00:05:48.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.691 EAL: request: mp_malloc_sync 00:05:48.691 EAL: No shared files mode enabled, IPC is disabled 00:05:48.691 EAL: Heap on socket 0 was shrunk by 258MB 00:05:48.691 EAL: Trying to obtain current memory policy. 00:05:48.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.691 EAL: Restoring previous memory policy: 4 00:05:48.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.691 EAL: request: mp_malloc_sync 00:05:48.691 EAL: No shared files mode enabled, IPC is disabled 00:05:48.691 EAL: Heap on socket 0 was expanded by 514MB 00:05:48.948 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.948 EAL: request: mp_malloc_sync 00:05:48.948 EAL: No shared files mode enabled, IPC is disabled 00:05:48.948 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.948 EAL: Trying to obtain current memory policy. 
00:05:48.948 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.948 EAL: Restoring previous memory policy: 4 00:05:48.948 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.948 EAL: request: mp_malloc_sync 00:05:48.948 EAL: No shared files mode enabled, IPC is disabled 00:05:48.948 EAL: Heap on socket 0 was expanded by 1026MB 00:05:49.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.205 passed 00:05:49.205 00:05:49.205 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.205 suites 1 1 n/a 0 0 00:05:49.205 tests 2 2 2 0 0 00:05:49.205 asserts 5190 5190 5190 0 n/a 00:05:49.205 00:05:49.205 Elapsed time = 1.025 seconds 00:05:49.205 EAL: request: mp_malloc_sync 00:05:49.205 EAL: No shared files mode enabled, IPC is disabled 00:05:49.205 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:49.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.205 EAL: request: mp_malloc_sync 00:05:49.205 EAL: No shared files mode enabled, IPC is disabled 00:05:49.205 EAL: Heap on socket 0 was shrunk by 2MB 00:05:49.205 EAL: No shared files mode enabled, IPC is disabled 00:05:49.205 EAL: No shared files mode enabled, IPC is disabled 00:05:49.205 EAL: No shared files mode enabled, IPC is disabled 00:05:49.205 00:05:49.205 real 0m1.238s 00:05:49.205 user 0m0.479s 00:05:49.205 sys 0m0.624s 00:05:49.205 ************************************ 00:05:49.205 END TEST env_vtophys 00:05:49.205 ************************************ 00:05:49.205 10:00:17 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.205 10:00:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:49.205 10:00:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:49.205 10:00:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.205 10:00:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.205 10:00:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.205 ************************************ 00:05:49.205 START TEST env_pci 00:05:49.205 ************************************ 00:05:49.205 10:00:17 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:49.462 00:05:49.462 00:05:49.462 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.462 http://cunit.sourceforge.net/ 00:05:49.462 00:05:49.462 00:05:49.462 Suite: pci 00:05:49.462 Test: pci_hook ...[2024-11-03 10:00:17.583165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69333 has claimed it 00:05:49.462 passed 00:05:49.462 00:05:49.462 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.462 suites 1 1 n/a 0 0 00:05:49.462 tests 1 1 1 0 0 00:05:49.462 asserts 25 25 25 0 n/a 00:05:49.462 00:05:49.462 Elapsed time = 0.004 seconds 00:05:49.462 EAL: Cannot find device (10000:00:01.0) 00:05:49.462 EAL: Failed to attach device on primary process 00:05:49.462 00:05:49.462 real 0m0.045s 00:05:49.462 user 0m0.015s 00:05:49.462 sys 0m0.029s 00:05:49.462 10:00:17 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.462 10:00:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:49.462 ************************************ 00:05:49.462 END TEST env_pci 00:05:49.462 ************************************ 00:05:49.462 10:00:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:49.462 10:00:17 env -- env/env.sh@15 -- # uname 00:05:49.462 10:00:17 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:49.462 10:00:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:49.462 10:00:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:49.462 10:00:17 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:49.462 10:00:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.462 10:00:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.462 ************************************ 00:05:49.462 START TEST env_dpdk_post_init 00:05:49.463 ************************************ 00:05:49.463 10:00:17 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:49.463 EAL: Detected CPU lcores: 10 00:05:49.463 EAL: Detected NUMA nodes: 1 00:05:49.463 EAL: Detected shared linkage of DPDK 00:05:49.463 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:49.463 EAL: Selected IOVA mode 'PA' 00:05:49.463 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.722 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:49.722 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:49.722 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:49.722 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:49.722 Starting DPDK initialization... 00:05:49.722 Starting SPDK post initialization... 00:05:49.722 SPDK NVMe probe 00:05:49.722 Attaching to 0000:00:10.0 00:05:49.722 Attaching to 0000:00:11.0 00:05:49.722 Attaching to 0000:00:12.0 00:05:49.722 Attaching to 0000:00:13.0 00:05:49.722 Attached to 0000:00:10.0 00:05:49.722 Attached to 0000:00:11.0 00:05:49.722 Attached to 0000:00:13.0 00:05:49.722 Attached to 0000:00:12.0 00:05:49.722 Cleaning up... 
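
Two details in the post-init run are worth calling out: the test inherits the -c 0x1 core mask and --base-virtaddr=0x200000000000 pinning that env.sh passed in, and the 'Attached to' lines complete as 13.0 before 12.0, presumably because attach callbacks fire in completion order rather than bus order. Re-running it by hand with the same arguments (paths as in this workspace; needs hugepages and root):

    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
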
00:05:49.722 00:05:49.722 real 0m0.202s 00:05:49.722 user 0m0.047s 00:05:49.722 sys 0m0.058s 00:05:49.722 10:00:17 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.722 10:00:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.722 ************************************ 00:05:49.722 END TEST env_dpdk_post_init 00:05:49.722 ************************************ 00:05:49.722 10:00:17 env -- env/env.sh@26 -- # uname 00:05:49.722 10:00:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:49.722 10:00:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.722 10:00:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.722 10:00:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.722 10:00:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.722 ************************************ 00:05:49.722 START TEST env_mem_callbacks 00:05:49.722 ************************************ 00:05:49.722 10:00:17 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.722 EAL: Detected CPU lcores: 10 00:05:49.722 EAL: Detected NUMA nodes: 1 00:05:49.722 EAL: Detected shared linkage of DPDK 00:05:49.722 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:49.722 EAL: Selected IOVA mode 'PA' 00:05:49.722 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.722 00:05:49.722 00:05:49.722 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.722 http://cunit.sourceforge.net/ 00:05:49.722 00:05:49.722 00:05:49.722 Suite: memory 00:05:49.722 Test: test ... 00:05:49.722 register 0x200000200000 2097152 00:05:49.722 malloc 3145728 00:05:49.722 register 0x200000400000 4194304 00:05:49.722 buf 0x200000500000 len 3145728 PASSED 00:05:49.722 malloc 64 00:05:49.722 buf 0x2000004fff40 len 64 PASSED 00:05:49.722 malloc 4194304 00:05:49.722 register 0x200000800000 6291456 00:05:49.722 buf 0x200000a00000 len 4194304 PASSED 00:05:49.722 free 0x200000500000 3145728 00:05:49.722 free 0x2000004fff40 64 00:05:49.722 unregister 0x200000400000 4194304 PASSED 00:05:49.722 free 0x200000a00000 4194304 00:05:49.722 unregister 0x200000800000 6291456 PASSED 00:05:49.722 malloc 8388608 00:05:49.722 register 0x200000400000 10485760 00:05:49.722 buf 0x200000600000 len 8388608 PASSED 00:05:49.722 free 0x200000600000 8388608 00:05:49.722 unregister 0x200000400000 10485760 PASSED 00:05:49.722 passed 00:05:49.722 00:05:49.722 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.722 suites 1 1 n/a 0 0 00:05:49.722 tests 1 1 1 0 0 00:05:49.722 asserts 15 15 15 0 n/a 00:05:49.722 00:05:49.722 Elapsed time = 0.010 seconds 00:05:49.984 00:05:49.984 real 0m0.154s 00:05:49.984 user 0m0.022s 00:05:49.984 sys 0m0.030s 00:05:49.984 10:00:18 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.984 ************************************ 00:05:49.984 END TEST env_mem_callbacks 00:05:49.984 ************************************ 00:05:49.984 10:00:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:49.984 00:05:49.984 real 0m2.390s 00:05:49.984 user 0m0.948s 00:05:49.984 sys 0m0.989s 00:05:49.984 10:00:18 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.984 ************************************ 00:05:49.984 END TEST env 00:05:49.984 ************************************ 00:05:49.984 10:00:18 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.984 10:00:18 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:49.984 10:00:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.984 10:00:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.984 10:00:18 -- common/autotest_common.sh@10 -- # set +x 00:05:49.984 ************************************ 00:05:49.984 START TEST rpc 00:05:49.984 ************************************ 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:49.984 * Looking for test storage... 00:05:49.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.984 10:00:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.984 10:00:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.984 10:00:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.984 10:00:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.984 10:00:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.984 10:00:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.984 10:00:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.984 10:00:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.984 10:00:18 rpc -- scripts/common.sh@345 -- # : 1 00:05:49.984 10:00:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.984 10:00:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.984 10:00:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.984 10:00:18 rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.984 10:00:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.984 10:00:18 rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.984 10:00:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.984 10:00:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.984 10:00:18 rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.984 10:00:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.984 10:00:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.984 10:00:18 rpc -- scripts/common.sh@368 -- # return 0 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.984 --rc genhtml_branch_coverage=1 00:05:49.984 --rc genhtml_function_coverage=1 00:05:49.984 --rc genhtml_legend=1 00:05:49.984 --rc geninfo_all_blocks=1 00:05:49.984 --rc geninfo_unexecuted_blocks=1 00:05:49.984 00:05:49.984 ' 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.984 --rc genhtml_branch_coverage=1 00:05:49.984 --rc genhtml_function_coverage=1 00:05:49.984 --rc genhtml_legend=1 00:05:49.984 --rc geninfo_all_blocks=1 00:05:49.984 --rc geninfo_unexecuted_blocks=1 00:05:49.984 00:05:49.984 ' 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.984 --rc genhtml_branch_coverage=1 00:05:49.984 --rc genhtml_function_coverage=1 00:05:49.984 --rc genhtml_legend=1 00:05:49.984 --rc geninfo_all_blocks=1 00:05:49.984 --rc geninfo_unexecuted_blocks=1 00:05:49.984 00:05:49.984 ' 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.984 --rc genhtml_branch_coverage=1 00:05:49.984 --rc genhtml_function_coverage=1 00:05:49.984 --rc genhtml_legend=1 00:05:49.984 --rc geninfo_all_blocks=1 00:05:49.984 --rc geninfo_unexecuted_blocks=1 00:05:49.984 00:05:49.984 ' 00:05:49.984 10:00:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69460 00:05:49.984 10:00:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.984 10:00:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69460 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@831 -- # '[' -z 69460 ']' 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
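
waitforlisten now blocks until the freshly forked spdk_tgt (pid 69460) opens its RPC socket. A simplified stand-in for what the helper does, polling with rpc.py (the real implementation adds retries, a timeout, and a pid liveness check):

    # Minimal waitforlisten equivalent; paths match this workspace.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
    echo "spdk_tgt ($spdk_pid) is listening on /var/tmp/spdk.sock"
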
00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.984 10:00:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.984 10:00:18 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:50.245 [2024-11-03 10:00:18.415366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:50.245 [2024-11-03 10:00:18.415525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69460 ] 00:05:50.245 [2024-11-03 10:00:18.552409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.245 [2024-11-03 10:00:18.603243] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:50.245 [2024-11-03 10:00:18.603299] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69460' to capture a snapshot of events at runtime. 00:05:50.245 [2024-11-03 10:00:18.603318] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.245 [2024-11-03 10:00:18.603330] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.245 [2024-11-03 10:00:18.603343] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69460 for offline analysis/debug. 00:05:50.245 [2024-11-03 10:00:18.603383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.188 10:00:19 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.188 10:00:19 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.188 10:00:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.188 10:00:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.188 10:00:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:51.188 10:00:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:51.188 10:00:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.188 10:00:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.188 10:00:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 ************************************ 00:05:51.188 START TEST rpc_integrity 00:05:51.188 ************************************ 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
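
The startup banner also tells you how to trace this run: the target was launched with -e bdev, which is why trace_get_info later reports tpoint_group_mask 0x8 with the bdev group fully enabled. Snapshotting the advertised shared-memory trace file would look like this (pid is from this run; the binary path is assumed from the build tree):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 69460
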
00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.188 { 00:05:51.188 "name": "Malloc0", 00:05:51.188 "aliases": [ 00:05:51.188 "ae35c81d-b5e6-4607-a49a-58ac1795b010" 00:05:51.188 ], 00:05:51.188 "product_name": "Malloc disk", 00:05:51.188 "block_size": 512, 00:05:51.188 "num_blocks": 16384, 00:05:51.188 "uuid": "ae35c81d-b5e6-4607-a49a-58ac1795b010", 00:05:51.188 "assigned_rate_limits": { 00:05:51.188 "rw_ios_per_sec": 0, 00:05:51.188 "rw_mbytes_per_sec": 0, 00:05:51.188 "r_mbytes_per_sec": 0, 00:05:51.188 "w_mbytes_per_sec": 0 00:05:51.188 }, 00:05:51.188 "claimed": false, 00:05:51.188 "zoned": false, 00:05:51.188 "supported_io_types": { 00:05:51.188 "read": true, 00:05:51.188 "write": true, 00:05:51.188 "unmap": true, 00:05:51.188 "flush": true, 00:05:51.188 "reset": true, 00:05:51.188 "nvme_admin": false, 00:05:51.188 "nvme_io": false, 00:05:51.188 "nvme_io_md": false, 00:05:51.188 "write_zeroes": true, 00:05:51.188 "zcopy": true, 00:05:51.188 "get_zone_info": false, 00:05:51.188 "zone_management": false, 00:05:51.188 "zone_append": false, 00:05:51.188 "compare": false, 00:05:51.188 "compare_and_write": false, 00:05:51.188 "abort": true, 00:05:51.188 "seek_hole": false, 00:05:51.188 "seek_data": false, 00:05:51.188 "copy": true, 00:05:51.188 "nvme_iov_md": false 00:05:51.188 }, 00:05:51.188 "memory_domains": [ 00:05:51.188 { 00:05:51.188 "dma_device_id": "system", 00:05:51.188 "dma_device_type": 1 00:05:51.188 }, 00:05:51.188 { 00:05:51.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.188 "dma_device_type": 2 00:05:51.188 } 00:05:51.188 ], 00:05:51.188 "driver_specific": {} 00:05:51.188 } 00:05:51.188 ]' 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 [2024-11-03 10:00:19.384057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:51.188 [2024-11-03 10:00:19.384127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.188 [2024-11-03 10:00:19.384155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:51.188 [2024-11-03 10:00:19.384166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.188 [2024-11-03 10:00:19.386708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.188 [2024-11-03 10:00:19.386757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.188 
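
Everything rpc_integrity exercises maps onto plain rpc.py calls against the default /var/tmp/spdk.sock socket; the sequence below reproduces the trace, with the counts this run verified noted alongside:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # -> Malloc0: 8 MB, 16384 x 512 B blocks
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 (exclusive_write)
    $rpc bdev_get_bdevs | jq length                    # 2: Malloc0 (claimed) plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                    # back to 0
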
Passthru0 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.188 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.188 { 00:05:51.188 "name": "Malloc0", 00:05:51.188 "aliases": [ 00:05:51.188 "ae35c81d-b5e6-4607-a49a-58ac1795b010" 00:05:51.188 ], 00:05:51.188 "product_name": "Malloc disk", 00:05:51.188 "block_size": 512, 00:05:51.188 "num_blocks": 16384, 00:05:51.188 "uuid": "ae35c81d-b5e6-4607-a49a-58ac1795b010", 00:05:51.188 "assigned_rate_limits": { 00:05:51.188 "rw_ios_per_sec": 0, 00:05:51.188 "rw_mbytes_per_sec": 0, 00:05:51.188 "r_mbytes_per_sec": 0, 00:05:51.188 "w_mbytes_per_sec": 0 00:05:51.188 }, 00:05:51.188 "claimed": true, 00:05:51.188 "claim_type": "exclusive_write", 00:05:51.188 "zoned": false, 00:05:51.188 "supported_io_types": { 00:05:51.188 "read": true, 00:05:51.188 "write": true, 00:05:51.188 "unmap": true, 00:05:51.188 "flush": true, 00:05:51.188 "reset": true, 00:05:51.188 "nvme_admin": false, 00:05:51.188 "nvme_io": false, 00:05:51.188 "nvme_io_md": false, 00:05:51.188 "write_zeroes": true, 00:05:51.188 "zcopy": true, 00:05:51.188 "get_zone_info": false, 00:05:51.188 "zone_management": false, 00:05:51.188 "zone_append": false, 00:05:51.188 "compare": false, 00:05:51.188 "compare_and_write": false, 00:05:51.188 "abort": true, 00:05:51.188 "seek_hole": false, 00:05:51.188 "seek_data": false, 00:05:51.188 "copy": true, 00:05:51.188 "nvme_iov_md": false 00:05:51.188 }, 00:05:51.188 "memory_domains": [ 00:05:51.188 { 00:05:51.188 "dma_device_id": "system", 00:05:51.188 "dma_device_type": 1 00:05:51.188 }, 00:05:51.188 { 00:05:51.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.188 "dma_device_type": 2 00:05:51.188 } 00:05:51.188 ], 00:05:51.188 "driver_specific": {} 00:05:51.188 }, 00:05:51.188 { 00:05:51.188 "name": "Passthru0", 00:05:51.188 "aliases": [ 00:05:51.188 "b6b7b6ae-cc0c-58db-8a03-995609958377" 00:05:51.189 ], 00:05:51.189 "product_name": "passthru", 00:05:51.189 "block_size": 512, 00:05:51.189 "num_blocks": 16384, 00:05:51.189 "uuid": "b6b7b6ae-cc0c-58db-8a03-995609958377", 00:05:51.189 "assigned_rate_limits": { 00:05:51.189 "rw_ios_per_sec": 0, 00:05:51.189 "rw_mbytes_per_sec": 0, 00:05:51.189 "r_mbytes_per_sec": 0, 00:05:51.189 "w_mbytes_per_sec": 0 00:05:51.189 }, 00:05:51.189 "claimed": false, 00:05:51.189 "zoned": false, 00:05:51.189 "supported_io_types": { 00:05:51.189 "read": true, 00:05:51.189 "write": true, 00:05:51.189 "unmap": true, 00:05:51.189 "flush": true, 00:05:51.189 "reset": true, 00:05:51.189 "nvme_admin": false, 00:05:51.189 "nvme_io": false, 00:05:51.189 "nvme_io_md": false, 00:05:51.189 "write_zeroes": true, 00:05:51.189 "zcopy": true, 00:05:51.189 "get_zone_info": false, 00:05:51.189 "zone_management": false, 00:05:51.189 "zone_append": false, 00:05:51.189 "compare": false, 00:05:51.189 "compare_and_write": false, 00:05:51.189 "abort": true, 00:05:51.189 "seek_hole": false, 00:05:51.189 "seek_data": false, 00:05:51.189 "copy": true, 00:05:51.189 "nvme_iov_md": false 00:05:51.189 }, 00:05:51.189 "memory_domains": [ 00:05:51.189 { 00:05:51.189 "dma_device_id": "system", 00:05:51.189 "dma_device_type": 1 00:05:51.189 }, 
00:05:51.189 { 00:05:51.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.189 "dma_device_type": 2 00:05:51.189 } 00:05:51.189 ], 00:05:51.189 "driver_specific": { 00:05:51.189 "passthru": { 00:05:51.189 "name": "Passthru0", 00:05:51.189 "base_bdev_name": "Malloc0" 00:05:51.189 } 00:05:51.189 } 00:05:51.189 } 00:05:51.189 ]' 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.189 10:00:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.189 00:05:51.189 real 0m0.231s 00:05:51.189 user 0m0.132s 00:05:51.189 sys 0m0.035s 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.189 ************************************ 00:05:51.189 END TEST rpc_integrity 00:05:51.189 ************************************ 00:05:51.189 10:00:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.448 10:00:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:51.448 10:00:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.448 10:00:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.448 10:00:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.448 ************************************ 00:05:51.448 START TEST rpc_plugins 00:05:51.448 ************************************ 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:51.448 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.448 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:51.448 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.448 10:00:19 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:51.448 { 00:05:51.448 "name": "Malloc1", 00:05:51.448 "aliases": [ 00:05:51.448 "f240d81c-1b25-4c27-93d1-c376c3682399" 00:05:51.448 ], 00:05:51.448 "product_name": "Malloc disk", 00:05:51.448 "block_size": 4096, 00:05:51.448 "num_blocks": 256, 00:05:51.448 "uuid": "f240d81c-1b25-4c27-93d1-c376c3682399", 00:05:51.448 "assigned_rate_limits": { 00:05:51.448 "rw_ios_per_sec": 0, 00:05:51.448 "rw_mbytes_per_sec": 0, 00:05:51.448 "r_mbytes_per_sec": 0, 00:05:51.448 "w_mbytes_per_sec": 0 00:05:51.448 }, 00:05:51.448 "claimed": false, 00:05:51.448 "zoned": false, 00:05:51.448 "supported_io_types": { 00:05:51.448 "read": true, 00:05:51.448 "write": true, 00:05:51.448 "unmap": true, 00:05:51.448 "flush": true, 00:05:51.448 "reset": true, 00:05:51.448 "nvme_admin": false, 00:05:51.448 "nvme_io": false, 00:05:51.448 "nvme_io_md": false, 00:05:51.448 "write_zeroes": true, 00:05:51.448 "zcopy": true, 00:05:51.448 "get_zone_info": false, 00:05:51.448 "zone_management": false, 00:05:51.448 "zone_append": false, 00:05:51.448 "compare": false, 00:05:51.448 "compare_and_write": false, 00:05:51.448 "abort": true, 00:05:51.448 "seek_hole": false, 00:05:51.448 "seek_data": false, 00:05:51.448 "copy": true, 00:05:51.448 "nvme_iov_md": false 00:05:51.448 }, 00:05:51.448 "memory_domains": [ 00:05:51.448 { 00:05:51.448 "dma_device_id": "system", 00:05:51.448 "dma_device_type": 1 00:05:51.448 }, 00:05:51.448 { 00:05:51.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.448 "dma_device_type": 2 00:05:51.448 } 00:05:51.448 ], 00:05:51.448 "driver_specific": {} 00:05:51.448 } 00:05:51.448 ]' 00:05:51.448 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:51.448 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:51.448 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.448 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.449 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:51.449 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.449 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.449 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:51.449 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:51.449 10:00:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:51.449 00:05:51.449 real 0m0.123s 00:05:51.449 user 0m0.066s 00:05:51.449 sys 0m0.017s 00:05:51.449 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.449 10:00:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 ************************************ 00:05:51.449 END TEST rpc_plugins 00:05:51.449 ************************************ 00:05:51.449 10:00:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:51.449 10:00:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.449 10:00:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.449 10:00:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 ************************************ 00:05:51.449 START TEST rpc_trace_cmd_test 
00:05:51.449 ************************************ 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:51.449 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69460", 00:05:51.449 "tpoint_group_mask": "0x8", 00:05:51.449 "iscsi_conn": { 00:05:51.449 "mask": "0x2", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "scsi": { 00:05:51.449 "mask": "0x4", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "bdev": { 00:05:51.449 "mask": "0x8", 00:05:51.449 "tpoint_mask": "0xffffffffffffffff" 00:05:51.449 }, 00:05:51.449 "nvmf_rdma": { 00:05:51.449 "mask": "0x10", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "nvmf_tcp": { 00:05:51.449 "mask": "0x20", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "ftl": { 00:05:51.449 "mask": "0x40", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "blobfs": { 00:05:51.449 "mask": "0x80", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "dsa": { 00:05:51.449 "mask": "0x200", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "thread": { 00:05:51.449 "mask": "0x400", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "nvme_pcie": { 00:05:51.449 "mask": "0x800", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "iaa": { 00:05:51.449 "mask": "0x1000", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "nvme_tcp": { 00:05:51.449 "mask": "0x2000", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "bdev_nvme": { 00:05:51.449 "mask": "0x4000", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "sock": { 00:05:51.449 "mask": "0x8000", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "blob": { 00:05:51.449 "mask": "0x10000", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 }, 00:05:51.449 "bdev_raid": { 00:05:51.449 "mask": "0x20000", 00:05:51.449 "tpoint_mask": "0x0" 00:05:51.449 } 00:05:51.449 }' 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:51.449 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:51.709 00:05:51.709 real 0m0.179s 00:05:51.709 user 0m0.146s 00:05:51.709 sys 0m0.023s 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.709 10:00:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.709 ************************************ 00:05:51.709 END TEST rpc_trace_cmd_test 00:05:51.709 ************************************ 00:05:51.709 10:00:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:51.709 10:00:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:51.709 10:00:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:51.709 10:00:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.709 10:00:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.709 10:00:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.709 ************************************ 00:05:51.709 START TEST rpc_daemon_integrity 00:05:51.709 ************************************ 00:05:51.709 10:00:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:51.709 10:00:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.709 10:00:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.709 10:00:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.709 10:00:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.709 { 00:05:51.709 "name": "Malloc2", 00:05:51.709 "aliases": [ 00:05:51.709 "4b5ae22a-1ab2-404a-b79d-d86082c7c5ed" 00:05:51.709 ], 00:05:51.709 "product_name": "Malloc disk", 00:05:51.709 "block_size": 512, 00:05:51.709 "num_blocks": 16384, 00:05:51.709 "uuid": "4b5ae22a-1ab2-404a-b79d-d86082c7c5ed", 00:05:51.709 "assigned_rate_limits": { 00:05:51.709 "rw_ios_per_sec": 0, 00:05:51.709 "rw_mbytes_per_sec": 0, 00:05:51.709 "r_mbytes_per_sec": 0, 00:05:51.709 "w_mbytes_per_sec": 0 00:05:51.709 }, 00:05:51.709 "claimed": false, 00:05:51.709 "zoned": false, 00:05:51.709 "supported_io_types": { 00:05:51.709 "read": true, 00:05:51.709 "write": true, 00:05:51.709 "unmap": true, 00:05:51.709 "flush": true, 00:05:51.709 "reset": true, 00:05:51.709 "nvme_admin": false, 00:05:51.709 "nvme_io": false, 00:05:51.709 "nvme_io_md": false, 00:05:51.709 "write_zeroes": true, 00:05:51.709 "zcopy": true, 00:05:51.709 "get_zone_info": false, 00:05:51.709 "zone_management": false, 00:05:51.709 "zone_append": false, 
00:05:51.709 "compare": false, 00:05:51.709 "compare_and_write": false, 00:05:51.709 "abort": true, 00:05:51.709 "seek_hole": false, 00:05:51.709 "seek_data": false, 00:05:51.709 "copy": true, 00:05:51.709 "nvme_iov_md": false 00:05:51.709 }, 00:05:51.709 "memory_domains": [ 00:05:51.709 { 00:05:51.709 "dma_device_id": "system", 00:05:51.709 "dma_device_type": 1 00:05:51.709 }, 00:05:51.709 { 00:05:51.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.709 "dma_device_type": 2 00:05:51.709 } 00:05:51.709 ], 00:05:51.709 "driver_specific": {} 00:05:51.709 } 00:05:51.709 ]' 00:05:51.709 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 [2024-11-03 10:00:20.101461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.971 [2024-11-03 10:00:20.101530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.971 [2024-11-03 10:00:20.101557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:51.971 [2024-11-03 10:00:20.101567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.971 [2024-11-03 10:00:20.104148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.971 [2024-11-03 10:00:20.104196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.971 Passthru0 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.971 { 00:05:51.971 "name": "Malloc2", 00:05:51.971 "aliases": [ 00:05:51.971 "4b5ae22a-1ab2-404a-b79d-d86082c7c5ed" 00:05:51.971 ], 00:05:51.971 "product_name": "Malloc disk", 00:05:51.971 "block_size": 512, 00:05:51.971 "num_blocks": 16384, 00:05:51.971 "uuid": "4b5ae22a-1ab2-404a-b79d-d86082c7c5ed", 00:05:51.971 "assigned_rate_limits": { 00:05:51.971 "rw_ios_per_sec": 0, 00:05:51.971 "rw_mbytes_per_sec": 0, 00:05:51.971 "r_mbytes_per_sec": 0, 00:05:51.971 "w_mbytes_per_sec": 0 00:05:51.971 }, 00:05:51.971 "claimed": true, 00:05:51.971 "claim_type": "exclusive_write", 00:05:51.971 "zoned": false, 00:05:51.971 "supported_io_types": { 00:05:51.971 "read": true, 00:05:51.971 "write": true, 00:05:51.971 "unmap": true, 00:05:51.971 "flush": true, 00:05:51.971 "reset": true, 00:05:51.971 "nvme_admin": false, 00:05:51.971 "nvme_io": false, 00:05:51.971 "nvme_io_md": false, 00:05:51.971 "write_zeroes": true, 00:05:51.971 "zcopy": true, 00:05:51.971 "get_zone_info": false, 00:05:51.971 "zone_management": false, 00:05:51.971 "zone_append": false, 00:05:51.971 "compare": false, 00:05:51.971 "compare_and_write": false, 00:05:51.971 "abort": true, 00:05:51.971 
"seek_hole": false, 00:05:51.971 "seek_data": false, 00:05:51.971 "copy": true, 00:05:51.971 "nvme_iov_md": false 00:05:51.971 }, 00:05:51.971 "memory_domains": [ 00:05:51.971 { 00:05:51.971 "dma_device_id": "system", 00:05:51.971 "dma_device_type": 1 00:05:51.971 }, 00:05:51.971 { 00:05:51.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.971 "dma_device_type": 2 00:05:51.971 } 00:05:51.971 ], 00:05:51.971 "driver_specific": {} 00:05:51.971 }, 00:05:51.971 { 00:05:51.971 "name": "Passthru0", 00:05:51.971 "aliases": [ 00:05:51.971 "308fbcc4-ac5f-58ac-a247-1096ebe976e2" 00:05:51.971 ], 00:05:51.971 "product_name": "passthru", 00:05:51.971 "block_size": 512, 00:05:51.971 "num_blocks": 16384, 00:05:51.971 "uuid": "308fbcc4-ac5f-58ac-a247-1096ebe976e2", 00:05:51.971 "assigned_rate_limits": { 00:05:51.971 "rw_ios_per_sec": 0, 00:05:51.971 "rw_mbytes_per_sec": 0, 00:05:51.971 "r_mbytes_per_sec": 0, 00:05:51.971 "w_mbytes_per_sec": 0 00:05:51.971 }, 00:05:51.971 "claimed": false, 00:05:51.971 "zoned": false, 00:05:51.971 "supported_io_types": { 00:05:51.971 "read": true, 00:05:51.971 "write": true, 00:05:51.971 "unmap": true, 00:05:51.971 "flush": true, 00:05:51.971 "reset": true, 00:05:51.971 "nvme_admin": false, 00:05:51.971 "nvme_io": false, 00:05:51.971 "nvme_io_md": false, 00:05:51.971 "write_zeroes": true, 00:05:51.971 "zcopy": true, 00:05:51.971 "get_zone_info": false, 00:05:51.971 "zone_management": false, 00:05:51.971 "zone_append": false, 00:05:51.971 "compare": false, 00:05:51.971 "compare_and_write": false, 00:05:51.971 "abort": true, 00:05:51.971 "seek_hole": false, 00:05:51.971 "seek_data": false, 00:05:51.971 "copy": true, 00:05:51.971 "nvme_iov_md": false 00:05:51.971 }, 00:05:51.971 "memory_domains": [ 00:05:51.971 { 00:05:51.971 "dma_device_id": "system", 00:05:51.971 "dma_device_type": 1 00:05:51.971 }, 00:05:51.971 { 00:05:51.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.971 "dma_device_type": 2 00:05:51.971 } 00:05:51.971 ], 00:05:51.971 "driver_specific": { 00:05:51.971 "passthru": { 00:05:51.971 "name": "Passthru0", 00:05:51.971 "base_bdev_name": "Malloc2" 00:05:51.971 } 00:05:51.971 } 00:05:51.971 } 00:05:51.971 ]' 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.971 10:00:20 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.971 00:05:51.971 real 0m0.226s 00:05:51.971 user 0m0.123s 00:05:51.971 sys 0m0.036s 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.971 10:00:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 ************************************ 00:05:51.971 END TEST rpc_daemon_integrity 00:05:51.971 ************************************ 00:05:51.971 10:00:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:51.971 10:00:20 rpc -- rpc/rpc.sh@84 -- # killprocess 69460 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@950 -- # '[' -z 69460 ']' 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@954 -- # kill -0 69460 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69460 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.972 killing process with pid 69460 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69460' 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@969 -- # kill 69460 00:05:51.972 10:00:20 rpc -- common/autotest_common.sh@974 -- # wait 69460 00:05:52.540 00:05:52.540 real 0m2.424s 00:05:52.540 user 0m2.816s 00:05:52.540 sys 0m0.662s 00:05:52.540 10:00:20 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.540 ************************************ 00:05:52.540 END TEST rpc 00:05:52.540 ************************************ 00:05:52.540 10:00:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.540 10:00:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.540 10:00:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.540 10:00:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.540 10:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:52.540 ************************************ 00:05:52.540 START TEST skip_rpc 00:05:52.540 ************************************ 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.540 * Looking for test storage... 
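The rpc_daemon_integrity entries above walk a malloc -> passthru claim -> teardown cycle through rpc_cmd. A minimal standalone sketch of the same flow, assuming a target already listening on the default /var/tmp/spdk.sock and using the repo's scripts/rpc.py (that path is an assumption; the test wraps it via rpc_cmd):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 8 512                      # 8 MiB bdev, 512 B blocks; the trace above got the name 'Malloc2'
$rpc bdev_passthru_create -b Malloc2 -p Passthru0  # claims Malloc2; its claim_type becomes exclusive_write
$rpc bdev_get_bdevs | jq length                    # 2: the claimed Malloc2 plus Passthru0
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete Malloc2
$rpc bdev_get_bdevs | jq length                    # back to 0, matching the '[' 0 == 0 ']' check above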
00:05:52.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.540 10:00:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.540 --rc genhtml_branch_coverage=1 00:05:52.540 --rc genhtml_function_coverage=1 00:05:52.540 --rc genhtml_legend=1 00:05:52.540 --rc geninfo_all_blocks=1 00:05:52.540 --rc geninfo_unexecuted_blocks=1 00:05:52.540 00:05:52.540 ' 00:05:52.540 10:00:20 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.540 --rc genhtml_branch_coverage=1 00:05:52.540 --rc genhtml_function_coverage=1 00:05:52.541 --rc genhtml_legend=1 00:05:52.541 --rc geninfo_all_blocks=1 00:05:52.541 --rc geninfo_unexecuted_blocks=1 00:05:52.541 00:05:52.541 ' 00:05:52.541 10:00:20 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:52.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.541 --rc genhtml_branch_coverage=1 00:05:52.541 --rc genhtml_function_coverage=1 00:05:52.541 --rc genhtml_legend=1 00:05:52.541 --rc geninfo_all_blocks=1 00:05:52.541 --rc geninfo_unexecuted_blocks=1 00:05:52.541 00:05:52.541 ' 00:05:52.541 10:00:20 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.541 --rc genhtml_branch_coverage=1 00:05:52.541 --rc genhtml_function_coverage=1 00:05:52.541 --rc genhtml_legend=1 00:05:52.541 --rc geninfo_all_blocks=1 00:05:52.541 --rc geninfo_unexecuted_blocks=1 00:05:52.541 00:05:52.541 ' 00:05:52.541 10:00:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.541 10:00:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:52.541 10:00:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:52.541 10:00:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.541 10:00:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.541 10:00:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.541 ************************************ 00:05:52.541 START TEST skip_rpc 00:05:52.541 ************************************ 00:05:52.541 10:00:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:52.541 10:00:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69662 00:05:52.541 10:00:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.541 10:00:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:52.541 10:00:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:52.798 [2024-11-03 10:00:20.924504] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
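The skip_rpc target whose startup begins above was launched with --no-rpc-server, so the only assertion the test can make is that an RPC client fails to reach it. A sketch of that check, assuming rpc.py is on PATH (an assumption; the test reaches it through rpc_cmd):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$spdk_tgt --no-rpc-server -m 0x1 & spdk_pid=$!
sleep 5                                    # rpc/skip_rpc.sh@19 above does the same settle
if rpc.py spdk_get_version; then           # must fail: nothing listens on /var/tmp/spdk.sock
    echo 'unexpected: RPC server answered' >&2; exit 1
fi
kill $spdk_pid; wait $spdk_pid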
00:05:52.798 [2024-11-03 10:00:20.924657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69662 ] 00:05:52.798 [2024-11-03 10:00:21.058547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.798 [2024-11-03 10:00:21.090578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69662 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69662 ']' 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69662 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69662 00:05:58.088 killing process with pid 69662 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69662' 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69662 00:05:58.088 10:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69662 00:05:58.088 00:05:58.088 real 0m5.263s 00:05:58.088 user 0m4.927s 00:05:58.088 sys 0m0.230s 00:05:58.088 ************************************ 00:05:58.088 END TEST skip_rpc 00:05:58.088 ************************************ 00:05:58.088 10:00:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.088 10:00:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:58.088 10:00:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:58.088 10:00:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.088 10:00:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.088 10:00:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.088 ************************************ 00:05:58.088 START TEST skip_rpc_with_json 00:05:58.088 ************************************ 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:58.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69748 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69748 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69748 ']' 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.088 10:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.088 [2024-11-03 10:00:26.217267] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
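Once the skip_rpc_with_json target above is up, the test deliberately calls nvmf_get_transports before any transport exists, expecting the JSON-RPC -19 "No such device" error shown below, and only then creates the TCP transport. A sketch of that ordering (rpc.py on PATH assumed):

if rpc.py nvmf_get_transports --trtype tcp; then exit 1; fi   # expected failure: transport 'tcp' does not exist yet
rpc.py nvmf_create_transport -t tcp                           # emits '*** TCP Transport Init ***' in the target log
rpc.py nvmf_get_transports --trtype tcp                       # now returns the transport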
00:05:58.089 [2024-11-03 10:00:26.217364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69748 ] 00:05:58.089 [2024-11-03 10:00:26.346884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.089 [2024-11-03 10:00:26.379336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.033 [2024-11-03 10:00:27.060148] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:59.033 request: 00:05:59.033 { 00:05:59.033 "trtype": "tcp", 00:05:59.033 "method": "nvmf_get_transports", 00:05:59.033 "req_id": 1 00:05:59.033 } 00:05:59.033 Got JSON-RPC error response 00:05:59.033 response: 00:05:59.033 { 00:05:59.033 "code": -19, 00:05:59.033 "message": "No such device" 00:05:59.033 } 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.033 [2024-11-03 10:00:27.072254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.033 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:59.033 { 00:05:59.033 "subsystems": [ 00:05:59.033 { 00:05:59.033 "subsystem": "fsdev", 00:05:59.033 "config": [ 00:05:59.033 { 00:05:59.033 "method": "fsdev_set_opts", 00:05:59.033 "params": { 00:05:59.033 "fsdev_io_pool_size": 65535, 00:05:59.033 "fsdev_io_cache_size": 256 00:05:59.033 } 00:05:59.033 } 00:05:59.033 ] 00:05:59.033 }, 00:05:59.033 { 00:05:59.033 "subsystem": "keyring", 00:05:59.033 "config": [] 00:05:59.033 }, 00:05:59.033 { 00:05:59.033 "subsystem": "iobuf", 00:05:59.033 "config": [ 00:05:59.033 { 00:05:59.033 "method": "iobuf_set_options", 00:05:59.033 "params": { 00:05:59.033 "small_pool_count": 8192, 00:05:59.033 "large_pool_count": 1024, 00:05:59.033 "small_bufsize": 8192, 00:05:59.033 "large_bufsize": 135168 00:05:59.033 } 00:05:59.033 } 00:05:59.033 ] 00:05:59.033 }, 00:05:59.033 { 00:05:59.033 "subsystem": "sock", 00:05:59.033 "config": [ 00:05:59.033 { 00:05:59.033 "method": 
"sock_set_default_impl", 00:05:59.033 "params": { 00:05:59.033 "impl_name": "posix" 00:05:59.033 } 00:05:59.033 }, 00:05:59.033 { 00:05:59.033 "method": "sock_impl_set_options", 00:05:59.033 "params": { 00:05:59.033 "impl_name": "ssl", 00:05:59.033 "recv_buf_size": 4096, 00:05:59.033 "send_buf_size": 4096, 00:05:59.033 "enable_recv_pipe": true, 00:05:59.033 "enable_quickack": false, 00:05:59.033 "enable_placement_id": 0, 00:05:59.033 "enable_zerocopy_send_server": true, 00:05:59.033 "enable_zerocopy_send_client": false, 00:05:59.033 "zerocopy_threshold": 0, 00:05:59.033 "tls_version": 0, 00:05:59.033 "enable_ktls": false 00:05:59.033 } 00:05:59.033 }, 00:05:59.033 { 00:05:59.033 "method": "sock_impl_set_options", 00:05:59.033 "params": { 00:05:59.033 "impl_name": "posix", 00:05:59.033 "recv_buf_size": 2097152, 00:05:59.033 "send_buf_size": 2097152, 00:05:59.033 "enable_recv_pipe": true, 00:05:59.033 "enable_quickack": false, 00:05:59.033 "enable_placement_id": 0, 00:05:59.033 "enable_zerocopy_send_server": true, 00:05:59.033 "enable_zerocopy_send_client": false, 00:05:59.033 "zerocopy_threshold": 0, 00:05:59.033 "tls_version": 0, 00:05:59.033 "enable_ktls": false 00:05:59.033 } 00:05:59.033 } 00:05:59.033 ] 00:05:59.033 }, 00:05:59.033 { 00:05:59.033 "subsystem": "vmd", 00:05:59.033 "config": [] 00:05:59.033 }, 00:05:59.033 { 00:05:59.033 "subsystem": "accel", 00:05:59.033 "config": [ 00:05:59.033 { 00:05:59.034 "method": "accel_set_options", 00:05:59.034 "params": { 00:05:59.034 "small_cache_size": 128, 00:05:59.034 "large_cache_size": 16, 00:05:59.034 "task_count": 2048, 00:05:59.034 "sequence_count": 2048, 00:05:59.034 "buf_count": 2048 00:05:59.034 } 00:05:59.034 } 00:05:59.034 ] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "bdev", 00:05:59.034 "config": [ 00:05:59.034 { 00:05:59.034 "method": "bdev_set_options", 00:05:59.034 "params": { 00:05:59.034 "bdev_io_pool_size": 65535, 00:05:59.034 "bdev_io_cache_size": 256, 00:05:59.034 "bdev_auto_examine": true, 00:05:59.034 "iobuf_small_cache_size": 128, 00:05:59.034 "iobuf_large_cache_size": 16 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "bdev_raid_set_options", 00:05:59.034 "params": { 00:05:59.034 "process_window_size_kb": 1024, 00:05:59.034 "process_max_bandwidth_mb_sec": 0 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "bdev_iscsi_set_options", 00:05:59.034 "params": { 00:05:59.034 "timeout_sec": 30 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "bdev_nvme_set_options", 00:05:59.034 "params": { 00:05:59.034 "action_on_timeout": "none", 00:05:59.034 "timeout_us": 0, 00:05:59.034 "timeout_admin_us": 0, 00:05:59.034 "keep_alive_timeout_ms": 10000, 00:05:59.034 "arbitration_burst": 0, 00:05:59.034 "low_priority_weight": 0, 00:05:59.034 "medium_priority_weight": 0, 00:05:59.034 "high_priority_weight": 0, 00:05:59.034 "nvme_adminq_poll_period_us": 10000, 00:05:59.034 "nvme_ioq_poll_period_us": 0, 00:05:59.034 "io_queue_requests": 0, 00:05:59.034 "delay_cmd_submit": true, 00:05:59.034 "transport_retry_count": 4, 00:05:59.034 "bdev_retry_count": 3, 00:05:59.034 "transport_ack_timeout": 0, 00:05:59.034 "ctrlr_loss_timeout_sec": 0, 00:05:59.034 "reconnect_delay_sec": 0, 00:05:59.034 "fast_io_fail_timeout_sec": 0, 00:05:59.034 "disable_auto_failback": false, 00:05:59.034 "generate_uuids": false, 00:05:59.034 "transport_tos": 0, 00:05:59.034 "nvme_error_stat": false, 00:05:59.034 "rdma_srq_size": 0, 00:05:59.034 "io_path_stat": false, 00:05:59.034 
"allow_accel_sequence": false, 00:05:59.034 "rdma_max_cq_size": 0, 00:05:59.034 "rdma_cm_event_timeout_ms": 0, 00:05:59.034 "dhchap_digests": [ 00:05:59.034 "sha256", 00:05:59.034 "sha384", 00:05:59.034 "sha512" 00:05:59.034 ], 00:05:59.034 "dhchap_dhgroups": [ 00:05:59.034 "null", 00:05:59.034 "ffdhe2048", 00:05:59.034 "ffdhe3072", 00:05:59.034 "ffdhe4096", 00:05:59.034 "ffdhe6144", 00:05:59.034 "ffdhe8192" 00:05:59.034 ] 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "bdev_nvme_set_hotplug", 00:05:59.034 "params": { 00:05:59.034 "period_us": 100000, 00:05:59.034 "enable": false 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "bdev_wait_for_examine" 00:05:59.034 } 00:05:59.034 ] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "scsi", 00:05:59.034 "config": null 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "scheduler", 00:05:59.034 "config": [ 00:05:59.034 { 00:05:59.034 "method": "framework_set_scheduler", 00:05:59.034 "params": { 00:05:59.034 "name": "static" 00:05:59.034 } 00:05:59.034 } 00:05:59.034 ] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "vhost_scsi", 00:05:59.034 "config": [] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "vhost_blk", 00:05:59.034 "config": [] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "ublk", 00:05:59.034 "config": [] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "nbd", 00:05:59.034 "config": [] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "nvmf", 00:05:59.034 "config": [ 00:05:59.034 { 00:05:59.034 "method": "nvmf_set_config", 00:05:59.034 "params": { 00:05:59.034 "discovery_filter": "match_any", 00:05:59.034 "admin_cmd_passthru": { 00:05:59.034 "identify_ctrlr": false 00:05:59.034 }, 00:05:59.034 "dhchap_digests": [ 00:05:59.034 "sha256", 00:05:59.034 "sha384", 00:05:59.034 "sha512" 00:05:59.034 ], 00:05:59.034 "dhchap_dhgroups": [ 00:05:59.034 "null", 00:05:59.034 "ffdhe2048", 00:05:59.034 "ffdhe3072", 00:05:59.034 "ffdhe4096", 00:05:59.034 "ffdhe6144", 00:05:59.034 "ffdhe8192" 00:05:59.034 ] 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "nvmf_set_max_subsystems", 00:05:59.034 "params": { 00:05:59.034 "max_subsystems": 1024 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "nvmf_set_crdt", 00:05:59.034 "params": { 00:05:59.034 "crdt1": 0, 00:05:59.034 "crdt2": 0, 00:05:59.034 "crdt3": 0 00:05:59.034 } 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "method": "nvmf_create_transport", 00:05:59.034 "params": { 00:05:59.034 "trtype": "TCP", 00:05:59.034 "max_queue_depth": 128, 00:05:59.034 "max_io_qpairs_per_ctrlr": 127, 00:05:59.034 "in_capsule_data_size": 4096, 00:05:59.034 "max_io_size": 131072, 00:05:59.034 "io_unit_size": 131072, 00:05:59.034 "max_aq_depth": 128, 00:05:59.034 "num_shared_buffers": 511, 00:05:59.034 "buf_cache_size": 4294967295, 00:05:59.034 "dif_insert_or_strip": false, 00:05:59.034 "zcopy": false, 00:05:59.034 "c2h_success": true, 00:05:59.034 "sock_priority": 0, 00:05:59.034 "abort_timeout_sec": 1, 00:05:59.034 "ack_timeout": 0, 00:05:59.034 "data_wr_pool_size": 0 00:05:59.034 } 00:05:59.034 } 00:05:59.034 ] 00:05:59.034 }, 00:05:59.034 { 00:05:59.034 "subsystem": "iscsi", 00:05:59.034 "config": [ 00:05:59.034 { 00:05:59.034 "method": "iscsi_set_options", 00:05:59.034 "params": { 00:05:59.034 "node_base": "iqn.2016-06.io.spdk", 00:05:59.034 "max_sessions": 128, 00:05:59.034 "max_connections_per_session": 2, 00:05:59.034 "max_queue_depth": 64, 00:05:59.034 "default_time2wait": 2, 
00:05:59.034 "default_time2retain": 20, 00:05:59.034 "first_burst_length": 8192, 00:05:59.034 "immediate_data": true, 00:05:59.034 "allow_duplicated_isid": false, 00:05:59.034 "error_recovery_level": 0, 00:05:59.034 "nop_timeout": 60, 00:05:59.034 "nop_in_interval": 30, 00:05:59.034 "disable_chap": false, 00:05:59.034 "require_chap": false, 00:05:59.034 "mutual_chap": false, 00:05:59.034 "chap_group": 0, 00:05:59.034 "max_large_datain_per_connection": 64, 00:05:59.034 "max_r2t_per_connection": 4, 00:05:59.034 "pdu_pool_size": 36864, 00:05:59.034 "immediate_data_pool_size": 16384, 00:05:59.034 "data_out_pool_size": 2048 00:05:59.034 } 00:05:59.034 } 00:05:59.034 ] 00:05:59.034 } 00:05:59.034 ] 00:05:59.034 } 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69748 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69748 ']' 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69748 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69748 00:05:59.034 killing process with pid 69748 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69748' 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69748 00:05:59.034 10:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69748 00:05:59.292 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69772 00:05:59.292 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.292 10:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69772 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69772 ']' 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69772 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69772 00:06:04.555 killing process with pid 69772 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69772' 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69772 
00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69772 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.555 ************************************ 00:06:04.555 END TEST skip_rpc_with_json 00:06:04.555 00:06:04.555 real 0m6.582s 00:06:04.555 user 0m6.317s 00:06:04.555 sys 0m0.486s 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.555 ************************************ 00:06:04.555 10:00:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:04.555 10:00:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.555 10:00:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.555 10:00:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.555 ************************************ 00:06:04.555 START TEST skip_rpc_with_delay 00:06:04.555 ************************************ 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.555 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.555 [2024-11-03 10:00:32.870970] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:04.555 [2024-11-03 10:00:32.871084] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:04.812 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:04.812 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.812 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.812 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.812 00:06:04.812 real 0m0.119s 00:06:04.812 user 0m0.065s 00:06:04.812 sys 0m0.053s 00:06:04.812 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.812 ************************************ 00:06:04.812 END TEST skip_rpc_with_delay 00:06:04.812 ************************************ 00:06:04.812 10:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:04.812 10:00:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:04.812 10:00:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:04.812 10:00:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:04.812 10:00:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.812 10:00:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.812 10:00:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.812 ************************************ 00:06:04.812 START TEST exit_on_failed_rpc_init 00:06:04.812 ************************************ 00:06:04.812 10:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:04.812 10:00:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69884 00:06:04.812 10:00:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69884 00:06:04.813 10:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69884 ']' 00:06:04.813 10:00:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.813 10:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.813 10:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.813 10:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.813 10:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.813 10:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.813 [2024-11-03 10:00:33.042498] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:04.813 [2024-11-03 10:00:33.042772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69884 ] 00:06:05.070 [2024-11-03 10:00:33.178351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.070 [2024-11-03 10:00:33.211046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.635 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.635 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:05.635 10:00:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.635 10:00:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.635 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:05.635 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:05.636 10:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.636 [2024-11-03 10:00:33.950511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:05.636 [2024-11-03 10:00:33.950624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69896 ] 00:06:05.894 [2024-11-03 10:00:34.083298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.894 [2024-11-03 10:00:34.115663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.894 [2024-11-03 10:00:34.115746] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:05.894 [2024-11-03 10:00:34.115765] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:05.894 [2024-11-03 10:00:34.115793] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69884 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69884 ']' 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69884 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69884 00:06:05.894 killing process with pid 69884 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69884' 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69884 00:06:05.894 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69884 00:06:06.152 00:06:06.152 real 0m1.504s 00:06:06.152 user 0m1.653s 00:06:06.152 sys 0m0.365s 00:06:06.152 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.152 ************************************ 00:06:06.152 END TEST exit_on_failed_rpc_init 00:06:06.152 ************************************ 00:06:06.152 10:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.410 10:00:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.411 00:06:06.411 real 0m13.850s 00:06:06.411 user 0m13.110s 00:06:06.411 sys 0m1.306s 00:06:06.411 ************************************ 00:06:06.411 END TEST skip_rpc 00:06:06.411 ************************************ 00:06:06.411 10:00:34 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.411 10:00:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.411 10:00:34 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.411 10:00:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.411 10:00:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.411 10:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:06.411 
************************************ 00:06:06.411 START TEST rpc_client 00:06:06.411 ************************************ 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.411 * Looking for test storage... 00:06:06.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.411 10:00:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.411 --rc genhtml_branch_coverage=1 00:06:06.411 --rc genhtml_function_coverage=1 00:06:06.411 --rc genhtml_legend=1 00:06:06.411 --rc geninfo_all_blocks=1 00:06:06.411 --rc geninfo_unexecuted_blocks=1 00:06:06.411 00:06:06.411 ' 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.411 --rc genhtml_branch_coverage=1 00:06:06.411 --rc genhtml_function_coverage=1 00:06:06.411 --rc genhtml_legend=1 00:06:06.411 --rc geninfo_all_blocks=1 00:06:06.411 --rc geninfo_unexecuted_blocks=1 00:06:06.411 00:06:06.411 ' 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.411 --rc genhtml_branch_coverage=1 00:06:06.411 --rc genhtml_function_coverage=1 00:06:06.411 --rc genhtml_legend=1 00:06:06.411 --rc geninfo_all_blocks=1 00:06:06.411 --rc geninfo_unexecuted_blocks=1 00:06:06.411 00:06:06.411 ' 00:06:06.411 10:00:34 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.411 --rc genhtml_branch_coverage=1 00:06:06.411 --rc genhtml_function_coverage=1 00:06:06.411 --rc genhtml_legend=1 00:06:06.411 --rc geninfo_all_blocks=1 00:06:06.411 --rc geninfo_unexecuted_blocks=1 00:06:06.411 00:06:06.411 ' 00:06:06.411 10:00:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:06.411 OK 00:06:06.411 10:00:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.669 ************************************ 00:06:06.669 END TEST rpc_client 00:06:06.669 ************************************ 00:06:06.669 00:06:06.669 real 0m0.190s 00:06:06.669 user 0m0.113s 00:06:06.669 sys 0m0.080s 00:06:06.669 10:00:34 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.670 10:00:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:06.670 10:00:34 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.670 10:00:34 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.670 10:00:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.670 10:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:06.670 ************************************ 00:06:06.670 START TEST json_config 00:06:06.670 ************************************ 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.670 10:00:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.670 10:00:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.670 10:00:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.670 10:00:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.670 10:00:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.670 10:00:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.670 10:00:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.670 10:00:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:06.670 10:00:34 json_config -- scripts/common.sh@345 -- # : 1 00:06:06.670 10:00:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.670 10:00:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.670 10:00:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:06.670 10:00:34 json_config -- scripts/common.sh@353 -- # local d=1 00:06:06.670 10:00:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.670 10:00:34 json_config -- scripts/common.sh@355 -- # echo 1 00:06:06.670 10:00:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.670 10:00:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@353 -- # local d=2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.670 10:00:34 json_config -- scripts/common.sh@355 -- # echo 2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.670 10:00:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.670 10:00:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.670 10:00:34 json_config -- scripts/common.sh@368 -- # return 0 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.670 --rc genhtml_branch_coverage=1 00:06:06.670 --rc genhtml_function_coverage=1 00:06:06.670 --rc genhtml_legend=1 00:06:06.670 --rc geninfo_all_blocks=1 00:06:06.670 --rc geninfo_unexecuted_blocks=1 00:06:06.670 00:06:06.670 ' 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.670 --rc genhtml_branch_coverage=1 00:06:06.670 --rc genhtml_function_coverage=1 00:06:06.670 --rc genhtml_legend=1 00:06:06.670 --rc geninfo_all_blocks=1 00:06:06.670 --rc geninfo_unexecuted_blocks=1 00:06:06.670 00:06:06.670 ' 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.670 --rc genhtml_branch_coverage=1 00:06:06.670 --rc genhtml_function_coverage=1 00:06:06.670 --rc genhtml_legend=1 00:06:06.670 --rc geninfo_all_blocks=1 00:06:06.670 --rc geninfo_unexecuted_blocks=1 00:06:06.670 00:06:06.670 ' 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.670 --rc genhtml_branch_coverage=1 00:06:06.670 --rc genhtml_function_coverage=1 00:06:06.670 --rc genhtml_legend=1 00:06:06.670 --rc geninfo_all_blocks=1 00:06:06.670 --rc geninfo_unexecuted_blocks=1 00:06:06.670 00:06:06.670 ' 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.670 10:00:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:314d78e8-5733-4db7-8146-5db03a3e62c6 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=314d78e8-5733-4db7-8146-5db03a3e62c6 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.670 10:00:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.670 10:00:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.670 10:00:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.670 10:00:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.670 10:00:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.670 10:00:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.670 10:00:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.670 10:00:34 json_config -- paths/export.sh@5 -- # export PATH 00:06:06.670 10:00:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@51 -- # : 0 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.670 10:00:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.670 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.670 10:00:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:06.670 WARNING: No tests are enabled so not running JSON configuration tests 00:06:06.670 10:00:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:06.670 00:06:06.670 real 0m0.143s 00:06:06.670 user 0m0.083s 00:06:06.670 sys 0m0.055s 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.670 10:00:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.670 ************************************ 00:06:06.670 END TEST json_config 00:06:06.670 ************************************ 00:06:06.670 10:00:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.670 10:00:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.670 10:00:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.670 10:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:06.670 ************************************ 00:06:06.670 START TEST json_config_extra_key 00:06:06.670 ************************************ 00:06:06.670 10:00:35 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.929 10:00:35 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.929 10:00:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.929 10:00:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.929 10:00:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.929 10:00:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.929 10:00:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.929 10:00:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.929 10:00:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.929 10:00:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.929 10:00:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.929 10:00:35 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.929 10:00:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.930 --rc genhtml_branch_coverage=1 00:06:06.930 --rc genhtml_function_coverage=1 00:06:06.930 --rc genhtml_legend=1 00:06:06.930 --rc geninfo_all_blocks=1 00:06:06.930 --rc geninfo_unexecuted_blocks=1 00:06:06.930 00:06:06.930 ' 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.930 --rc genhtml_branch_coverage=1 00:06:06.930 --rc genhtml_function_coverage=1 00:06:06.930 --rc genhtml_legend=1 00:06:06.930 --rc geninfo_all_blocks=1 00:06:06.930 --rc geninfo_unexecuted_blocks=1 00:06:06.930 00:06:06.930 ' 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.930 --rc genhtml_branch_coverage=1 00:06:06.930 --rc genhtml_function_coverage=1 00:06:06.930 --rc genhtml_legend=1 00:06:06.930 --rc geninfo_all_blocks=1 00:06:06.930 --rc geninfo_unexecuted_blocks=1 00:06:06.930 00:06:06.930 ' 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.930 --rc genhtml_branch_coverage=1 00:06:06.930 --rc 
genhtml_function_coverage=1 00:06:06.930 --rc genhtml_legend=1 00:06:06.930 --rc geninfo_all_blocks=1 00:06:06.930 --rc geninfo_unexecuted_blocks=1 00:06:06.930 00:06:06.930 ' 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:314d78e8-5733-4db7-8146-5db03a3e62c6 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=314d78e8-5733-4db7-8146-5db03a3e62c6 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.930 10:00:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.930 10:00:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.930 10:00:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.930 10:00:35 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.930 10:00:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:06.930 10:00:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.930 10:00:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:06.930 INFO: launching applications... 
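[editor's note] The "[: : integer expression expected" diagnostic above (nvmf/common.sh line 33) is ordinary bash behavior: test(1)'s -eq requires integer operands, and the traced variable expands to an empty string. A minimal reproduction and a defensive variant; the variable name here is a hypothetical stand-in:

    flag=""                          # stand-in for the unset/empty variable in the trace
    if [ "$flag" -eq 1 ]; then       # prints "[: : integer expression expected" and returns
        echo enabled                 # status 2, so this branch is skipped and the run goes on
    fi
    if [ "${flag:-0}" -eq 1 ]; then  # default an empty value to 0 so the test is well-formed
        echo enabled
    fi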
00:06:06.930 10:00:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70079 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.930 Waiting for target to run... 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.930 10:00:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70079 /var/tmp/spdk_tgt.sock 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70079 ']' 00:06:06.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.930 10:00:35 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.931 10:00:35 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.931 10:00:35 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.931 10:00:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.931 [2024-11-03 10:00:35.230556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:06.931 [2024-11-03 10:00:35.230682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70079 ] 00:06:07.189 [2024-11-03 10:00:35.525988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.189 [2024-11-03 10:00:35.544677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.755 00:06:07.755 INFO: shutting down applications... 00:06:07.755 10:00:36 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.755 10:00:36 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:07.755 10:00:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
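[editor's note] The launch traced above reduces to: start spdk_tgt with a JSON config on a private RPC socket, record the pid, and poll until the target answers. A minimal sketch of that pattern; the real waitforlisten helper lives in autotest_common.sh, and probing with spdk_get_version here is an assumption about its mechanism, not the exact implementation:

    sock=/var/tmp/spdk_tgt.sock
    spdk_tgt -m 0x1 -s 1024 -r "$sock" --json extra_key.json &
    app_pid=$!
    for ((i = 0; i < 100; i++)); do                 # max_retries=100, as in the trace
        if rpc.py -s "$sock" -t 1 spdk_get_version &> /dev/null; then
            break                                   # target is up and serving RPCs
        fi
        sleep 0.1
    done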
00:06:07.755 10:00:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70079 ]] 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70079 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70079 00:06:07.755 10:00:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.322 10:00:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.322 10:00:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.322 10:00:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70079 00:06:08.322 10:00:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.322 10:00:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:08.322 10:00:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.322 SPDK target shutdown done 00:06:08.322 10:00:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.322 Success 00:06:08.322 10:00:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:08.322 00:06:08.322 real 0m1.552s 00:06:08.322 user 0m1.290s 00:06:08.322 sys 0m0.310s 00:06:08.322 ************************************ 00:06:08.322 END TEST json_config_extra_key 00:06:08.322 ************************************ 00:06:08.322 10:00:36 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.322 10:00:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.322 10:00:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:08.322 10:00:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.322 10:00:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.322 10:00:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.322 ************************************ 00:06:08.322 START TEST alias_rpc 00:06:08.322 ************************************ 00:06:08.322 10:00:36 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:08.580 * Looking for test storage... 
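[editor's note] The shutdown traced just above is deliberately polite: SIGINT first, then up to 30 half-second liveness checks with kill -0 before the helper would escalate. Reduced to its core loop:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$app_pid" 2> /dev/null; then
            break          # target exited cleanly -> "SPDK target shutdown done"
        fi
        sleep 0.5
    done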
00:06:08.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:08.580 10:00:36 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.580 10:00:36 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.580 10:00:36 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.580 10:00:36 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:08.580 10:00:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.581 10:00:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:08.581 10:00:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.581 10:00:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.581 10:00:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.581 10:00:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.581 --rc genhtml_branch_coverage=1 00:06:08.581 --rc genhtml_function_coverage=1 00:06:08.581 --rc genhtml_legend=1 00:06:08.581 --rc geninfo_all_blocks=1 00:06:08.581 --rc geninfo_unexecuted_blocks=1 00:06:08.581 00:06:08.581 ' 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.581 --rc genhtml_branch_coverage=1 00:06:08.581 --rc genhtml_function_coverage=1 00:06:08.581 --rc genhtml_legend=1 00:06:08.581 --rc geninfo_all_blocks=1 00:06:08.581 --rc geninfo_unexecuted_blocks=1 00:06:08.581 00:06:08.581 ' 00:06:08.581 10:00:36 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.581 --rc genhtml_branch_coverage=1 00:06:08.581 --rc genhtml_function_coverage=1 00:06:08.581 --rc genhtml_legend=1 00:06:08.581 --rc geninfo_all_blocks=1 00:06:08.581 --rc geninfo_unexecuted_blocks=1 00:06:08.581 00:06:08.581 ' 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.581 --rc genhtml_branch_coverage=1 00:06:08.581 --rc genhtml_function_coverage=1 00:06:08.581 --rc genhtml_legend=1 00:06:08.581 --rc geninfo_all_blocks=1 00:06:08.581 --rc geninfo_unexecuted_blocks=1 00:06:08.581 00:06:08.581 ' 00:06:08.581 10:00:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.581 10:00:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70152 00:06:08.581 10:00:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70152 00:06:08.581 10:00:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70152 ']' 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.581 10:00:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.581 [2024-11-03 10:00:36.843976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
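[editor's note] The lt/cmp_versions xtrace that precedes each test above (the lcov --version gate) is a plain element-wise version compare: split both strings on '.', '-' and ':', then compare component by component as integers. A simplified sketch of the scripts/common.sh logic; the real decimal() helper also normalizes non-numeric components, which is omitted here:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    lt 1.15 2 && echo "lcov is older than 2.x"   # matches the traced call above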
00:06:08.581 [2024-11-03 10:00:36.844101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70152 ] 00:06:08.839 [2024-11-03 10:00:36.980098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.839 [2024-11-03 10:00:37.013049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.405 10:00:37 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.405 10:00:37 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:09.405 10:00:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:09.663 10:00:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70152 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70152 ']' 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70152 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70152 00:06:09.663 killing process with pid 70152 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70152' 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@969 -- # kill 70152 00:06:09.663 10:00:37 alias_rpc -- common/autotest_common.sh@974 -- # wait 70152 00:06:09.922 ************************************ 00:06:09.922 END TEST alias_rpc 00:06:09.922 ************************************ 00:06:09.922 00:06:09.922 real 0m1.565s 00:06:09.922 user 0m1.702s 00:06:09.922 sys 0m0.354s 00:06:09.922 10:00:38 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.922 10:00:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.922 10:00:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:09.922 10:00:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:09.922 10:00:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.922 10:00:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.922 10:00:38 -- common/autotest_common.sh@10 -- # set +x 00:06:09.922 ************************************ 00:06:09.922 START TEST spdkcli_tcp 00:06:09.922 ************************************ 00:06:09.922 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:10.180 * Looking for test storage... 
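[editor's note] The killprocess teardown traced at the end of the alias_rpc test pairs with the launch: confirm the pid is alive, check the process name so a sudo wrapper is never signalled directly, then kill and reap. A simplified sketch of the autotest_common.sh helper:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                   # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1      # real helper special-cases sudo; omitted
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap the child, propagate its status
    }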
00:06:10.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.180 10:00:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.180 --rc genhtml_branch_coverage=1 00:06:10.180 --rc genhtml_function_coverage=1 00:06:10.180 --rc genhtml_legend=1 00:06:10.180 --rc geninfo_all_blocks=1 00:06:10.180 --rc geninfo_unexecuted_blocks=1 00:06:10.180 00:06:10.180 ' 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.180 --rc genhtml_branch_coverage=1 00:06:10.180 --rc genhtml_function_coverage=1 00:06:10.180 --rc genhtml_legend=1 00:06:10.180 --rc geninfo_all_blocks=1 00:06:10.180 --rc geninfo_unexecuted_blocks=1 00:06:10.180 
00:06:10.180 ' 00:06:10.180 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.181 --rc genhtml_branch_coverage=1 00:06:10.181 --rc genhtml_function_coverage=1 00:06:10.181 --rc genhtml_legend=1 00:06:10.181 --rc geninfo_all_blocks=1 00:06:10.181 --rc geninfo_unexecuted_blocks=1 00:06:10.181 00:06:10.181 ' 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.181 --rc genhtml_branch_coverage=1 00:06:10.181 --rc genhtml_function_coverage=1 00:06:10.181 --rc genhtml_legend=1 00:06:10.181 --rc geninfo_all_blocks=1 00:06:10.181 --rc geninfo_unexecuted_blocks=1 00:06:10.181 00:06:10.181 ' 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70232 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70232 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70232 ']' 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.181 10:00:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.181 10:00:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:10.181 [2024-11-03 10:00:38.476700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
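[editor's note] On the "-m 0x3 -p 0" flags above: 0x3 is a CPU core mask (binary 11, i.e. cores 0 and 1) and -p 0 makes core 0 the main core, which is why the startup notices that follow report two reactors. Expanding a mask by hand:

    mask=0x3
    for ((core = 0; core < 8; core++)); do       # 8 bits is plenty for this mask
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # prints: reactor on core 0, reactor on core 1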
00:06:10.181 [2024-11-03 10:00:38.476843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70232 ] 00:06:10.439 [2024-11-03 10:00:38.609873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.439 [2024-11-03 10:00:38.643666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.439 [2024-11-03 10:00:38.643704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.006 10:00:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.006 10:00:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:11.006 10:00:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70249 00:06:11.006 10:00:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:11.006 10:00:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:11.264 [ 00:06:11.264 "bdev_malloc_delete", 00:06:11.264 "bdev_malloc_create", 00:06:11.264 "bdev_null_resize", 00:06:11.264 "bdev_null_delete", 00:06:11.264 "bdev_null_create", 00:06:11.264 "bdev_nvme_cuse_unregister", 00:06:11.264 "bdev_nvme_cuse_register", 00:06:11.264 "bdev_opal_new_user", 00:06:11.264 "bdev_opal_set_lock_state", 00:06:11.264 "bdev_opal_delete", 00:06:11.264 "bdev_opal_get_info", 00:06:11.264 "bdev_opal_create", 00:06:11.264 "bdev_nvme_opal_revert", 00:06:11.264 "bdev_nvme_opal_init", 00:06:11.264 "bdev_nvme_send_cmd", 00:06:11.264 "bdev_nvme_set_keys", 00:06:11.264 "bdev_nvme_get_path_iostat", 00:06:11.264 "bdev_nvme_get_mdns_discovery_info", 00:06:11.264 "bdev_nvme_stop_mdns_discovery", 00:06:11.264 "bdev_nvme_start_mdns_discovery", 00:06:11.264 "bdev_nvme_set_multipath_policy", 00:06:11.264 "bdev_nvme_set_preferred_path", 00:06:11.264 "bdev_nvme_get_io_paths", 00:06:11.264 "bdev_nvme_remove_error_injection", 00:06:11.264 "bdev_nvme_add_error_injection", 00:06:11.264 "bdev_nvme_get_discovery_info", 00:06:11.264 "bdev_nvme_stop_discovery", 00:06:11.264 "bdev_nvme_start_discovery", 00:06:11.264 "bdev_nvme_get_controller_health_info", 00:06:11.264 "bdev_nvme_disable_controller", 00:06:11.264 "bdev_nvme_enable_controller", 00:06:11.264 "bdev_nvme_reset_controller", 00:06:11.264 "bdev_nvme_get_transport_statistics", 00:06:11.264 "bdev_nvme_apply_firmware", 00:06:11.264 "bdev_nvme_detach_controller", 00:06:11.264 "bdev_nvme_get_controllers", 00:06:11.264 "bdev_nvme_attach_controller", 00:06:11.264 "bdev_nvme_set_hotplug", 00:06:11.264 "bdev_nvme_set_options", 00:06:11.264 "bdev_passthru_delete", 00:06:11.264 "bdev_passthru_create", 00:06:11.264 "bdev_lvol_set_parent_bdev", 00:06:11.264 "bdev_lvol_set_parent", 00:06:11.264 "bdev_lvol_check_shallow_copy", 00:06:11.264 "bdev_lvol_start_shallow_copy", 00:06:11.264 "bdev_lvol_grow_lvstore", 00:06:11.264 "bdev_lvol_get_lvols", 00:06:11.264 "bdev_lvol_get_lvstores", 00:06:11.264 "bdev_lvol_delete", 00:06:11.264 "bdev_lvol_set_read_only", 00:06:11.264 "bdev_lvol_resize", 00:06:11.264 "bdev_lvol_decouple_parent", 00:06:11.264 "bdev_lvol_inflate", 00:06:11.264 "bdev_lvol_rename", 00:06:11.264 "bdev_lvol_clone_bdev", 00:06:11.264 "bdev_lvol_clone", 00:06:11.264 "bdev_lvol_snapshot", 00:06:11.264 "bdev_lvol_create", 00:06:11.264 "bdev_lvol_delete_lvstore", 00:06:11.264 "bdev_lvol_rename_lvstore", 00:06:11.264 
"bdev_lvol_create_lvstore", 00:06:11.264 "bdev_raid_set_options", 00:06:11.265 "bdev_raid_remove_base_bdev", 00:06:11.265 "bdev_raid_add_base_bdev", 00:06:11.265 "bdev_raid_delete", 00:06:11.265 "bdev_raid_create", 00:06:11.265 "bdev_raid_get_bdevs", 00:06:11.265 "bdev_error_inject_error", 00:06:11.265 "bdev_error_delete", 00:06:11.265 "bdev_error_create", 00:06:11.265 "bdev_split_delete", 00:06:11.265 "bdev_split_create", 00:06:11.265 "bdev_delay_delete", 00:06:11.265 "bdev_delay_create", 00:06:11.265 "bdev_delay_update_latency", 00:06:11.265 "bdev_zone_block_delete", 00:06:11.265 "bdev_zone_block_create", 00:06:11.265 "blobfs_create", 00:06:11.265 "blobfs_detect", 00:06:11.265 "blobfs_set_cache_size", 00:06:11.265 "bdev_xnvme_delete", 00:06:11.265 "bdev_xnvme_create", 00:06:11.265 "bdev_aio_delete", 00:06:11.265 "bdev_aio_rescan", 00:06:11.265 "bdev_aio_create", 00:06:11.265 "bdev_ftl_set_property", 00:06:11.265 "bdev_ftl_get_properties", 00:06:11.265 "bdev_ftl_get_stats", 00:06:11.265 "bdev_ftl_unmap", 00:06:11.265 "bdev_ftl_unload", 00:06:11.265 "bdev_ftl_delete", 00:06:11.265 "bdev_ftl_load", 00:06:11.265 "bdev_ftl_create", 00:06:11.265 "bdev_virtio_attach_controller", 00:06:11.265 "bdev_virtio_scsi_get_devices", 00:06:11.265 "bdev_virtio_detach_controller", 00:06:11.265 "bdev_virtio_blk_set_hotplug", 00:06:11.265 "bdev_iscsi_delete", 00:06:11.265 "bdev_iscsi_create", 00:06:11.265 "bdev_iscsi_set_options", 00:06:11.265 "accel_error_inject_error", 00:06:11.265 "ioat_scan_accel_module", 00:06:11.265 "dsa_scan_accel_module", 00:06:11.265 "iaa_scan_accel_module", 00:06:11.265 "keyring_file_remove_key", 00:06:11.265 "keyring_file_add_key", 00:06:11.265 "keyring_linux_set_options", 00:06:11.265 "fsdev_aio_delete", 00:06:11.265 "fsdev_aio_create", 00:06:11.265 "iscsi_get_histogram", 00:06:11.265 "iscsi_enable_histogram", 00:06:11.265 "iscsi_set_options", 00:06:11.265 "iscsi_get_auth_groups", 00:06:11.265 "iscsi_auth_group_remove_secret", 00:06:11.265 "iscsi_auth_group_add_secret", 00:06:11.265 "iscsi_delete_auth_group", 00:06:11.265 "iscsi_create_auth_group", 00:06:11.265 "iscsi_set_discovery_auth", 00:06:11.265 "iscsi_get_options", 00:06:11.265 "iscsi_target_node_request_logout", 00:06:11.265 "iscsi_target_node_set_redirect", 00:06:11.265 "iscsi_target_node_set_auth", 00:06:11.265 "iscsi_target_node_add_lun", 00:06:11.265 "iscsi_get_stats", 00:06:11.265 "iscsi_get_connections", 00:06:11.265 "iscsi_portal_group_set_auth", 00:06:11.265 "iscsi_start_portal_group", 00:06:11.265 "iscsi_delete_portal_group", 00:06:11.265 "iscsi_create_portal_group", 00:06:11.265 "iscsi_get_portal_groups", 00:06:11.265 "iscsi_delete_target_node", 00:06:11.265 "iscsi_target_node_remove_pg_ig_maps", 00:06:11.265 "iscsi_target_node_add_pg_ig_maps", 00:06:11.265 "iscsi_create_target_node", 00:06:11.265 "iscsi_get_target_nodes", 00:06:11.265 "iscsi_delete_initiator_group", 00:06:11.265 "iscsi_initiator_group_remove_initiators", 00:06:11.265 "iscsi_initiator_group_add_initiators", 00:06:11.265 "iscsi_create_initiator_group", 00:06:11.265 "iscsi_get_initiator_groups", 00:06:11.265 "nvmf_set_crdt", 00:06:11.265 "nvmf_set_config", 00:06:11.265 "nvmf_set_max_subsystems", 00:06:11.265 "nvmf_stop_mdns_prr", 00:06:11.265 "nvmf_publish_mdns_prr", 00:06:11.265 "nvmf_subsystem_get_listeners", 00:06:11.265 "nvmf_subsystem_get_qpairs", 00:06:11.265 "nvmf_subsystem_get_controllers", 00:06:11.265 "nvmf_get_stats", 00:06:11.265 "nvmf_get_transports", 00:06:11.265 "nvmf_create_transport", 00:06:11.265 "nvmf_get_targets", 00:06:11.265 
"nvmf_delete_target", 00:06:11.265 "nvmf_create_target", 00:06:11.265 "nvmf_subsystem_allow_any_host", 00:06:11.265 "nvmf_subsystem_set_keys", 00:06:11.265 "nvmf_subsystem_remove_host", 00:06:11.265 "nvmf_subsystem_add_host", 00:06:11.265 "nvmf_ns_remove_host", 00:06:11.265 "nvmf_ns_add_host", 00:06:11.265 "nvmf_subsystem_remove_ns", 00:06:11.265 "nvmf_subsystem_set_ns_ana_group", 00:06:11.265 "nvmf_subsystem_add_ns", 00:06:11.265 "nvmf_subsystem_listener_set_ana_state", 00:06:11.265 "nvmf_discovery_get_referrals", 00:06:11.265 "nvmf_discovery_remove_referral", 00:06:11.265 "nvmf_discovery_add_referral", 00:06:11.265 "nvmf_subsystem_remove_listener", 00:06:11.265 "nvmf_subsystem_add_listener", 00:06:11.265 "nvmf_delete_subsystem", 00:06:11.265 "nvmf_create_subsystem", 00:06:11.265 "nvmf_get_subsystems", 00:06:11.265 "env_dpdk_get_mem_stats", 00:06:11.265 "nbd_get_disks", 00:06:11.265 "nbd_stop_disk", 00:06:11.265 "nbd_start_disk", 00:06:11.265 "ublk_recover_disk", 00:06:11.265 "ublk_get_disks", 00:06:11.265 "ublk_stop_disk", 00:06:11.265 "ublk_start_disk", 00:06:11.265 "ublk_destroy_target", 00:06:11.265 "ublk_create_target", 00:06:11.265 "virtio_blk_create_transport", 00:06:11.265 "virtio_blk_get_transports", 00:06:11.265 "vhost_controller_set_coalescing", 00:06:11.265 "vhost_get_controllers", 00:06:11.265 "vhost_delete_controller", 00:06:11.265 "vhost_create_blk_controller", 00:06:11.265 "vhost_scsi_controller_remove_target", 00:06:11.265 "vhost_scsi_controller_add_target", 00:06:11.265 "vhost_start_scsi_controller", 00:06:11.265 "vhost_create_scsi_controller", 00:06:11.265 "thread_set_cpumask", 00:06:11.265 "scheduler_set_options", 00:06:11.265 "framework_get_governor", 00:06:11.265 "framework_get_scheduler", 00:06:11.265 "framework_set_scheduler", 00:06:11.265 "framework_get_reactors", 00:06:11.265 "thread_get_io_channels", 00:06:11.265 "thread_get_pollers", 00:06:11.265 "thread_get_stats", 00:06:11.265 "framework_monitor_context_switch", 00:06:11.265 "spdk_kill_instance", 00:06:11.265 "log_enable_timestamps", 00:06:11.265 "log_get_flags", 00:06:11.265 "log_clear_flag", 00:06:11.265 "log_set_flag", 00:06:11.265 "log_get_level", 00:06:11.265 "log_set_level", 00:06:11.265 "log_get_print_level", 00:06:11.265 "log_set_print_level", 00:06:11.265 "framework_enable_cpumask_locks", 00:06:11.265 "framework_disable_cpumask_locks", 00:06:11.265 "framework_wait_init", 00:06:11.265 "framework_start_init", 00:06:11.265 "scsi_get_devices", 00:06:11.265 "bdev_get_histogram", 00:06:11.265 "bdev_enable_histogram", 00:06:11.265 "bdev_set_qos_limit", 00:06:11.265 "bdev_set_qd_sampling_period", 00:06:11.265 "bdev_get_bdevs", 00:06:11.265 "bdev_reset_iostat", 00:06:11.265 "bdev_get_iostat", 00:06:11.265 "bdev_examine", 00:06:11.265 "bdev_wait_for_examine", 00:06:11.265 "bdev_set_options", 00:06:11.265 "accel_get_stats", 00:06:11.265 "accel_set_options", 00:06:11.265 "accel_set_driver", 00:06:11.265 "accel_crypto_key_destroy", 00:06:11.265 "accel_crypto_keys_get", 00:06:11.265 "accel_crypto_key_create", 00:06:11.265 "accel_assign_opc", 00:06:11.265 "accel_get_module_info", 00:06:11.265 "accel_get_opc_assignments", 00:06:11.265 "vmd_rescan", 00:06:11.265 "vmd_remove_device", 00:06:11.265 "vmd_enable", 00:06:11.265 "sock_get_default_impl", 00:06:11.265 "sock_set_default_impl", 00:06:11.265 "sock_impl_set_options", 00:06:11.265 "sock_impl_get_options", 00:06:11.265 "iobuf_get_stats", 00:06:11.265 "iobuf_set_options", 00:06:11.265 "keyring_get_keys", 00:06:11.265 "framework_get_pci_devices", 00:06:11.265 
"framework_get_config", 00:06:11.265 "framework_get_subsystems", 00:06:11.265 "fsdev_set_opts", 00:06:11.265 "fsdev_get_opts", 00:06:11.265 "trace_get_info", 00:06:11.265 "trace_get_tpoint_group_mask", 00:06:11.265 "trace_disable_tpoint_group", 00:06:11.265 "trace_enable_tpoint_group", 00:06:11.265 "trace_clear_tpoint_mask", 00:06:11.265 "trace_set_tpoint_mask", 00:06:11.265 "notify_get_notifications", 00:06:11.265 "notify_get_types", 00:06:11.265 "spdk_get_version", 00:06:11.265 "rpc_get_methods" 00:06:11.265 ] 00:06:11.265 10:00:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.265 10:00:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:11.265 10:00:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70232 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70232 ']' 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70232 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70232 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.265 killing process with pid 70232 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70232' 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70232 00:06:11.265 10:00:39 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70232 00:06:11.523 ************************************ 00:06:11.523 END TEST spdkcli_tcp 00:06:11.523 ************************************ 00:06:11.523 00:06:11.523 real 0m1.584s 00:06:11.523 user 0m2.810s 00:06:11.523 sys 0m0.395s 00:06:11.523 10:00:39 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.523 10:00:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.523 10:00:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.523 10:00:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.523 10:00:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.523 10:00:39 -- common/autotest_common.sh@10 -- # set +x 00:06:11.782 ************************************ 00:06:11.782 START TEST dpdk_mem_utility 00:06:11.782 ************************************ 00:06:11.782 10:00:39 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.782 * Looking for test storage... 
00:06:11.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:11.782 10:00:39 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.782 10:00:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.782 10:00:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.782 10:00:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.782 --rc genhtml_branch_coverage=1 00:06:11.782 --rc genhtml_function_coverage=1 00:06:11.782 --rc genhtml_legend=1 00:06:11.782 --rc geninfo_all_blocks=1 00:06:11.782 --rc geninfo_unexecuted_blocks=1 00:06:11.782 00:06:11.782 ' 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.782 --rc 
genhtml_branch_coverage=1 00:06:11.782 --rc genhtml_function_coverage=1 00:06:11.782 --rc genhtml_legend=1 00:06:11.782 --rc geninfo_all_blocks=1 00:06:11.782 --rc geninfo_unexecuted_blocks=1 00:06:11.782 00:06:11.782 ' 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.782 --rc genhtml_branch_coverage=1 00:06:11.782 --rc genhtml_function_coverage=1 00:06:11.782 --rc genhtml_legend=1 00:06:11.782 --rc geninfo_all_blocks=1 00:06:11.782 --rc geninfo_unexecuted_blocks=1 00:06:11.782 00:06:11.782 ' 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.782 --rc genhtml_branch_coverage=1 00:06:11.782 --rc genhtml_function_coverage=1 00:06:11.782 --rc genhtml_legend=1 00:06:11.782 --rc geninfo_all_blocks=1 00:06:11.782 --rc geninfo_unexecuted_blocks=1 00:06:11.782 00:06:11.782 ' 00:06:11.782 10:00:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:11.782 10:00:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70326 00:06:11.782 10:00:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70326 00:06:11.782 10:00:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70326 ']' 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.782 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.782 [2024-11-03 10:00:40.113154] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:11.782 [2024-11-03 10:00:40.113401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70326 ] 00:06:12.040 [2024-11-03 10:00:40.248249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.040 [2024-11-03 10:00:40.281150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.637 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.637 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:12.637 10:00:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:12.637 10:00:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:12.637 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.637 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.637 { 00:06:12.637 "filename": "/tmp/spdk_mem_dump.txt" 00:06:12.637 } 00:06:12.637 10:00:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.637 10:00:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:12.897 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:12.897 1 heaps totaling size 860.000000 MiB 00:06:12.897 size: 860.000000 MiB heap id: 0 00:06:12.897 end heaps---------- 00:06:12.897 9 mempools totaling size 642.649841 MiB 00:06:12.897 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:12.897 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:12.897 size: 92.545471 MiB name: bdev_io_70326 00:06:12.897 size: 51.011292 MiB name: evtpool_70326 00:06:12.897 size: 50.003479 MiB name: msgpool_70326 00:06:12.897 size: 36.509338 MiB name: fsdev_io_70326 00:06:12.897 size: 21.763794 MiB name: PDU_Pool 00:06:12.897 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:12.897 size: 0.026123 MiB name: Session_Pool 00:06:12.897 end mempools------- 00:06:12.897 6 memzones totaling size 4.142822 MiB 00:06:12.897 size: 1.000366 MiB name: RG_ring_0_70326 00:06:12.897 size: 1.000366 MiB name: RG_ring_1_70326 00:06:12.897 size: 1.000366 MiB name: RG_ring_4_70326 00:06:12.897 size: 1.000366 MiB name: RG_ring_5_70326 00:06:12.897 size: 0.125366 MiB name: RG_ring_2_70326 00:06:12.897 size: 0.015991 MiB name: RG_ring_3_70326 00:06:12.897 end memzones------- 00:06:12.897 10:00:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:12.897 heap id: 0 total size: 860.000000 MiB number of busy elements: 304 number of free elements: 16 00:06:12.897 list of free elements. 
size: 13.937073 MiB 00:06:12.897 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:12.897 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:12.897 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:12.897 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:12.897 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:12.897 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:12.897 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:12.897 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:12.897 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:12.897 element at address: 0x20001d800000 with size: 0.567505 MiB 00:06:12.897 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:12.897 element at address: 0x200003e00000 with size: 0.488831 MiB 00:06:12.897 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:12.897 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:12.897 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:06:12.897 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:12.897 list of standard malloc elements. size: 199.266235 MiB 00:06:12.897 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:12.897 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:12.897 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:12.897 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:12.897 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:12.897 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:12.897 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:12.897 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:12.897 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:12.897 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:12.897 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:12.897 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:12.897 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:12.897 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:12.897 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:12.897 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:12.897 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:12.898 element at 
address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:12.898 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87da00 
with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893400 with size: 0.000183 MiB 
00:06:12.898 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:12.898 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:12.899 element at 
address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ec40 
with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:12.899 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:12.899 list of memzone associated elements. 
size: 646.796692 MiB 00:06:12.899 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:12.899 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:12.899 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:12.899 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:12.899 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:12.899 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70326_0 00:06:12.899 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:12.899 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70326_0 00:06:12.899 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:12.899 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70326_0 00:06:12.899 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:12.899 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70326_0 00:06:12.899 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:12.899 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:12.899 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:12.899 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:12.899 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:12.899 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70326 00:06:12.899 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:12.899 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70326 00:06:12.899 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:12.899 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70326 00:06:12.899 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:12.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:12.899 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:12.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:12.899 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:12.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:12.899 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:12.899 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:12.899 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:12.899 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70326 00:06:12.899 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:12.899 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70326 00:06:12.899 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:12.899 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70326 00:06:12.899 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:12.899 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70326 00:06:12.899 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:12.899 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70326 00:06:12.899 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:12.899 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70326 00:06:12.899 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:12.899 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:12.899 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:12.899 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:12.899 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:12.899 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:12.899 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:12.900 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70326 00:06:12.900 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:12.900 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:12.900 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:06:12.900 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:12.900 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:12.900 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70326 00:06:12.900 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:06:12.900 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:12.900 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:12.900 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70326 00:06:12.900 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:12.900 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70326 00:06:12.900 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:12.900 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70326 00:06:12.900 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:06:12.900 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:12.900 10:00:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:12.900 10:00:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70326 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70326 ']' 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70326 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70326 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70326' 00:06:12.900 killing process with pid 70326 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70326 00:06:12.900 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70326 00:06:13.158 00:06:13.158 real 0m1.453s 00:06:13.158 user 0m1.499s 00:06:13.158 sys 0m0.358s 00:06:13.158 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.158 ************************************ 00:06:13.158 10:00:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.158 END TEST dpdk_mem_utility 00:06:13.158 ************************************ 00:06:13.158 10:00:41 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:13.158 10:00:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.158 10:00:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.158 10:00:41 -- common/autotest_common.sh@10 -- # set +x 
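The dpdk_mem_utility run above exercises two pieces: the env_dpdk_get_mem_stats RPC, whose response names the dump file ("/tmp/spdk_mem_dump.txt"), and scripts/dpdk_mem_info.py, which renders that dump first as the heap/mempool/memzone summary and then, with -m 0, as the per-element listing for heap id 0. A minimal sketch of the same sequence by hand, assuming a running spdk_tgt and the workspace paths shown in this log:
  # Ask the target to write its DPDK memory snapshot (the RPC reply above
  # shows it lands in /tmp/spdk_mem_dump.txt)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools, and memzones from the dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # Print the per-element breakdown for heap id 0, as captured above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0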
00:06:13.158 ************************************ 00:06:13.158 START TEST event 00:06:13.158 ************************************ 00:06:13.158 10:00:41 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:13.158 * Looking for test storage... 00:06:13.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:13.158 10:00:41 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.158 10:00:41 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.158 10:00:41 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.419 10:00:41 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.419 10:00:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.419 10:00:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.419 10:00:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.419 10:00:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.419 10:00:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.419 10:00:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.419 10:00:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.419 10:00:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.419 10:00:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.419 10:00:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.419 10:00:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.419 10:00:41 event -- scripts/common.sh@344 -- # case "$op" in 00:06:13.419 10:00:41 event -- scripts/common.sh@345 -- # : 1 00:06:13.419 10:00:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.419 10:00:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.419 10:00:41 event -- scripts/common.sh@365 -- # decimal 1 00:06:13.419 10:00:41 event -- scripts/common.sh@353 -- # local d=1 00:06:13.419 10:00:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.419 10:00:41 event -- scripts/common.sh@355 -- # echo 1 00:06:13.419 10:00:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.419 10:00:41 event -- scripts/common.sh@366 -- # decimal 2 00:06:13.419 10:00:41 event -- scripts/common.sh@353 -- # local d=2 00:06:13.419 10:00:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.419 10:00:41 event -- scripts/common.sh@355 -- # echo 2 00:06:13.419 10:00:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.419 10:00:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.419 10:00:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.419 10:00:41 event -- scripts/common.sh@368 -- # return 0 00:06:13.419 10:00:41 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.419 10:00:41 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.419 --rc genhtml_branch_coverage=1 00:06:13.419 --rc genhtml_function_coverage=1 00:06:13.419 --rc genhtml_legend=1 00:06:13.419 --rc geninfo_all_blocks=1 00:06:13.419 --rc geninfo_unexecuted_blocks=1 00:06:13.419 00:06:13.419 ' 00:06:13.419 10:00:41 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.419 --rc genhtml_branch_coverage=1 00:06:13.419 --rc genhtml_function_coverage=1 00:06:13.419 --rc genhtml_legend=1 00:06:13.419 --rc 
geninfo_all_blocks=1 00:06:13.419 --rc geninfo_unexecuted_blocks=1 00:06:13.420 ' 00:06:13.420 10:00:41 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.420 --rc genhtml_branch_coverage=1 00:06:13.420 --rc genhtml_function_coverage=1 00:06:13.420 --rc genhtml_legend=1 00:06:13.420 --rc geninfo_all_blocks=1 00:06:13.420 --rc geninfo_unexecuted_blocks=1 00:06:13.420 00:06:13.420 ' 00:06:13.420 10:00:41 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.420 --rc genhtml_branch_coverage=1 00:06:13.420 --rc genhtml_function_coverage=1 00:06:13.420 --rc genhtml_legend=1 00:06:13.420 --rc geninfo_all_blocks=1 00:06:13.420 --rc geninfo_unexecuted_blocks=1 00:06:13.420 00:06:13.420 ' 00:06:13.420 10:00:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:13.420 10:00:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:13.420 10:00:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.420 10:00:41 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:13.420 10:00:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.420 10:00:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.420 ************************************ 00:06:13.420 START TEST event_perf 00:06:13.420 ************************************ 00:06:13.420 10:00:41 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.420 Running I/O for 1 second...[2024-11-03 10:00:41.597078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:13.420 [2024-11-03 10:00:41.597186] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70407 ] 00:06:13.420 [2024-11-03 10:00:41.733002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.420 [2024-11-03 10:00:41.767956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.420 [2024-11-03 10:00:41.768267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.420 [2024-11-03 10:00:41.768446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.420 Running I/O for 1 second...[2024-11-03 10:00:41.768634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.798 00:06:14.798 lcore 0: 169021 00:06:14.798 lcore 1: 169022 00:06:14.798 lcore 2: 169019 00:06:14.798 lcore 3: 169022 00:06:14.798 done.
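The per-lcore counters above are event_perf's result: launched with -m 0xF it starts one reactor on each of four cores, and with -t 1 it runs the event loop for one second, after which each "lcore N" line reports how many events that reactor processed. A sketch of re-running the benchmark standalone, assuming the same workspace layout and that hugepages are already set up (DPDK initialization typically needs root):
  # Re-run the reactor event benchmark on cores 0-3 for one second
  sudo /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1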
00:06:14.798 00:06:14.798 real 0m1.256s 00:06:14.798 user 0m4.061s 00:06:14.798 sys 0m0.075s 00:06:14.798 10:00:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.798 ************************************ 00:06:14.798 END TEST event_perf 00:06:14.798 10:00:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.798 ************************************ 00:06:14.798 10:00:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:14.798 10:00:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:14.798 10:00:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.798 10:00:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.798 ************************************ 00:06:14.798 START TEST event_reactor 00:06:14.798 ************************************ 00:06:14.798 10:00:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:14.798 [2024-11-03 10:00:42.912179] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:14.798 [2024-11-03 10:00:42.912410] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70446 ] 00:06:14.798 [2024-11-03 10:00:43.047918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.798 [2024-11-03 10:00:43.079319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.185 test_start 00:06:16.185 oneshot 00:06:16.185 tick 100 00:06:16.185 tick 100 00:06:16.185 tick 250 00:06:16.185 tick 100 00:06:16.185 tick 100 00:06:16.185 tick 100 00:06:16.185 tick 250 00:06:16.185 tick 500 00:06:16.185 tick 100 00:06:16.185 tick 100 00:06:16.185 tick 250 00:06:16.185 tick 100 00:06:16.185 tick 100 00:06:16.185 test_end 00:06:16.185 00:06:16.185 real 0m1.249s 00:06:16.185 user 0m1.081s 00:06:16.185 sys 0m0.059s 00:06:16.185 10:00:44 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.185 ************************************ 00:06:16.185 10:00:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:16.185 END TEST event_reactor 00:06:16.185 ************************************ 00:06:16.185 10:00:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.185 10:00:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:16.185 10:00:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.185 10:00:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.185 ************************************ 00:06:16.185 START TEST event_reactor_perf 00:06:16.185 ************************************ 00:06:16.185 10:00:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.185 [2024-11-03 10:00:44.223564] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:16.185 [2024-11-03 10:00:44.223674] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70483 ] 00:06:16.185 [2024-11-03 10:00:44.359623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.185 [2024-11-03 10:00:44.391125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.119 test_start 00:06:17.119 test_end 00:06:17.119 Performance: 312664 events per second 00:06:17.119 00:06:17.119 real 0m1.247s 00:06:17.119 user 0m1.081s 00:06:17.119 sys 0m0.059s 00:06:17.119 ************************************ 00:06:17.119 END TEST event_reactor_perf 00:06:17.119 ************************************ 00:06:17.119 10:00:45 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.119 10:00:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.377 10:00:45 event -- event/event.sh@49 -- # uname -s 00:06:17.377 10:00:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:17.377 10:00:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:17.377 10:00:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.377 10:00:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.377 10:00:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.377 ************************************ 00:06:17.377 START TEST event_scheduler 00:06:17.377 ************************************ 00:06:17.377 10:00:45 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:17.377 * Looking for test storage... 
00:06:17.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:17.377 10:00:45 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:17.377 10:00:45 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:17.377 10:00:45 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:17.377 10:00:45 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.377 10:00:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:17.377 10:00:45 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.377 10:00:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:17.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.378 --rc genhtml_branch_coverage=1 00:06:17.378 --rc genhtml_function_coverage=1 00:06:17.378 --rc genhtml_legend=1 00:06:17.378 --rc geninfo_all_blocks=1 00:06:17.378 --rc geninfo_unexecuted_blocks=1 00:06:17.378 00:06:17.378 ' 00:06:17.378 10:00:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:17.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.378 --rc genhtml_branch_coverage=1 00:06:17.378 --rc genhtml_function_coverage=1 00:06:17.378 --rc genhtml_legend=1 00:06:17.378 --rc geninfo_all_blocks=1 00:06:17.378 --rc geninfo_unexecuted_blocks=1 00:06:17.378 00:06:17.378 ' 00:06:17.378 10:00:45 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:17.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.378 --rc genhtml_branch_coverage=1 00:06:17.378 --rc genhtml_function_coverage=1 00:06:17.378 --rc genhtml_legend=1 00:06:17.378 --rc geninfo_all_blocks=1 00:06:17.378 --rc geninfo_unexecuted_blocks=1 00:06:17.378 00:06:17.378 ' 00:06:17.378 10:00:45 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:17.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.378 --rc genhtml_branch_coverage=1 00:06:17.378 --rc genhtml_function_coverage=1 00:06:17.378 --rc genhtml_legend=1 00:06:17.378 --rc geninfo_all_blocks=1 00:06:17.378 --rc geninfo_unexecuted_blocks=1 00:06:17.378 00:06:17.378 ' 00:06:17.378 10:00:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:17.378 10:00:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70548 00:06:17.378 10:00:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.378 10:00:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70548 00:06:17.378 10:00:45 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70548 ']' 00:06:17.378 10:00:45 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:17.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:00:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 10:00:45 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 10:00:45 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 10:00:45 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.378 10:00:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.378 [2024-11-03 10:00:45.709833] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:17.378 [2024-11-03 10:00:45.710104] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70548 ] 00:06:17.636 [2024-11-03 10:00:45.845527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.636 [2024-11-03 10:00:45.880948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.636 [2024-11-03 10:00:45.881298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.636 [2024-11-03 10:00:45.881507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.636 [2024-11-03 10:00:45.881537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.203 10:00:46 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.203 10:00:46 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:18.203 10:00:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:18.203 10:00:46 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.203 10:00:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.203 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.203 POWER: Cannot set governor of lcore 0 to userspace 00:06:18.203 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.203 POWER: Cannot set governor of lcore 0 to performance 00:06:18.203 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:18.203 POWER: Cannot set governor of lcore 0 to userspace 00:06:18.203 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:18.203 POWER: Unable to set Power Management Environment for lcore 0 00:06:18.203 [2024-11-03 10:00:46.550974] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:18.203 [2024-11-03 10:00:46.550993] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:18.203 [2024-11-03 10:00:46.551021] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:18.203 [2024-11-03 10:00:46.551047] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:18.203 [2024-11-03 10:00:46.551055] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:18.203 [2024-11-03 10:00:46.551063]
scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:18.203 10:00:46 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.203 10:00:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:18.203 10:00:46 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.203 10:00:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 [2024-11-03 10:00:46.606531] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:18.463 10:00:46 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.463 10:00:46 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.463 10:00:46 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 ************************************ 00:06:18.463 START TEST scheduler_create_thread 00:06:18.463 ************************************ 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 2 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 3 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 4 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 5 00:06:18.463 10:00:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 6 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 7 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 8 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 9 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 10 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.463 10:00:46 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.463 10:00:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.398 10:00:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.398 10:00:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.398 10:00:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.398 10:00:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.825 10:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.825 10:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:20.825 10:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:20.825 10:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.825 10:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.763 ************************************ 00:06:21.764 END TEST scheduler_create_thread 00:06:21.764 ************************************ 00:06:21.764 10:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.764 00:06:21.764 real 0m3.372s 00:06:21.764 user 0m0.015s 00:06:21.764 sys 0m0.007s 00:06:21.764 10:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.764 10:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 10:00:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:21.764 10:00:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70548 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70548 ']' 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70548 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70548 00:06:21.764 killing process with pid 70548 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70548' 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70548 00:06:21.764 10:00:50 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70548 00:06:22.025 [2024-11-03 10:00:50.373773] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
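[editor] The scheduler_create_thread run above reduces to a short RPC sequence. The following is a sketch reconstructed from the xtrace, not the verbatim scheduler.sh source; rpc_cmd here stands in for scripts/rpc.py against /var/tmp/spdk.sock, and the thread IDs 11 and 12 are simply the values this particular run returned.

rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# Four fully busy threads, one pinned to each core in the 0xF mask...
for m in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $m -a 100
done
# ...and four idle (0% active) threads pinned the same way.
for m in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $m -a 0
done
# Unpinned threads: one at 30% load, one created idle and then raised to 50%.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50
# A throwaway thread, created and then deleted to exercise cleanup.
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete $thread_id

This hands the dynamic scheduler a mixed load (pinned busy, pinned idle, partially active, deleted) to rebalance across the four reactors for the roughly three seconds the test runs (real 0m3.372s above).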
00:06:22.286 00:06:22.286 real 0m5.055s 00:06:22.286 user 0m10.150s 00:06:22.286 sys 0m0.310s 00:06:22.286 ************************************ 00:06:22.286 END TEST event_scheduler 00:06:22.286 ************************************ 00:06:22.286 10:00:50 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.286 10:00:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.286 10:00:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.286 10:00:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.286 10:00:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.286 10:00:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.286 10:00:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.286 ************************************ 00:06:22.286 START TEST app_repeat 00:06:22.286 ************************************ 00:06:22.286 10:00:50 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:22.286 Process app_repeat pid: 70654 00:06:22.286 spdk_app_start Round 0 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70654 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70654' 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70654 /var/tmp/spdk-nbd.sock 00:06:22.286 10:00:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.286 10:00:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70654 ']' 00:06:22.286 10:00:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.286 10:00:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.286 10:00:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.286 10:00:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.286 10:00:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.547 [2024-11-03 10:00:50.664508] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
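[editor] Condensed, the app_repeat launch traced above follows the pattern below. It is a sketch: the waitforlisten body approximates the autotest_common.sh helper visible in the xtrace (only max_retries=100 appears there; the probe RPC and sleep interval are assumptions), and -t 4 mirrors the repeat_times=4 set in event.sh.

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries--)); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died before listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
            rpc_get_methods &>/dev/null && return 0
        sleep 0.5    # interval is an assumption; the trace shows only the retry cap
    done
    return 1
}

/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
    -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock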
00:06:22.547 [2024-11-03 10:00:50.664611] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70654 ] 00:06:22.547 [2024-11-03 10:00:50.803565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.547 [2024-11-03 10:00:50.855957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.547 [2024-11-03 10:00:50.856044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.489 10:00:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.489 10:00:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:23.489 10:00:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.489 Malloc0 00:06:23.489 10:00:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.750 Malloc1 00:06:23.750 10:00:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.750 10:00:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.011 /dev/nbd0 00:06:24.011 10:00:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.011 10:00:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:24.011 10:00:52 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.011 1+0 records in 00:06:24.011 1+0 records out 00:06:24.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250762 s, 16.3 MB/s 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:24.011 10:00:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:24.011 10:00:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.011 10:00:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.011 10:00:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.272 /dev/nbd1 00:06:24.272 10:00:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.272 10:00:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.272 1+0 records in 00:06:24.272 1+0 records out 00:06:24.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329462 s, 12.4 MB/s 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:24.272 10:00:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:24.272 10:00:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.272 10:00:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.272 10:00:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.272 10:00:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
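[editor] The waitfornbd checks traced above (autotest_common.sh@868-889) reconstruct into the helper below; treat it as a sketch inferred from the xtrace, with the poll interval assumed since the trace shows only the loop bounds and the dd/stat verification.

waitfornbd() {
    local nbd_name=$1 i
    # Wait for the kernel to publish the device (up to 20 polls).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # interval is an assumption
    done
    # Prove it is readable: one 4 KiB direct-I/O block must copy back with a
    # non-zero size.
    dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
        bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
    [ "$size" != 0 ]
}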
00:06:24.272 10:00:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.533 { 00:06:24.533 "nbd_device": "/dev/nbd0", 00:06:24.533 "bdev_name": "Malloc0" 00:06:24.533 }, 00:06:24.533 { 00:06:24.533 "nbd_device": "/dev/nbd1", 00:06:24.533 "bdev_name": "Malloc1" 00:06:24.533 } 00:06:24.533 ]' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.533 { 00:06:24.533 "nbd_device": "/dev/nbd0", 00:06:24.533 "bdev_name": "Malloc0" 00:06:24.533 }, 00:06:24.533 { 00:06:24.533 "nbd_device": "/dev/nbd1", 00:06:24.533 "bdev_name": "Malloc1" 00:06:24.533 } 00:06:24.533 ]' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.533 /dev/nbd1' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.533 /dev/nbd1' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.533 256+0 records in 00:06:24.533 256+0 records out 00:06:24.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0098294 s, 107 MB/s 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.533 256+0 records in 00:06:24.533 256+0 records out 00:06:24.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161663 s, 64.9 MB/s 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.533 256+0 records in 00:06:24.533 256+0 records out 00:06:24.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206683 s, 50.7 MB/s 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.533 10:00:52 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.533 10:00:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.794 10:00:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.052 10:00:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.053 10:00:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.053 10:00:53 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.053 10:00:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.312 10:00:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.312 10:00:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.570 10:00:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.570 [2024-11-03 10:00:53.812692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.570 [2024-11-03 10:00:53.846049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.570 [2024-11-03 10:00:53.846206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.570 [2024-11-03 10:00:53.880612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.570 [2024-11-03 10:00:53.880664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.857 spdk_app_start Round 1 00:06:28.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.857 10:00:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.857 10:00:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:28.857 10:00:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70654 /var/tmp/spdk-nbd.sock 00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70654 ']' 00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
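[editor] Each round repeats the same flow. Reconstructed from the event.sh line numbers in the trace (@23-@35), the driver loop is essentially the sketch below, with the two-device NBD check summarized as the nbd_rpc_data_verify call it actually makes; spdk_kill_instance SIGTERM ends the round, app_repeat comes back up, and that is why every round begins with another waitforlisten.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # Two 64 MB malloc bdevs with 4 KiB blocks, exported over NBD and
    # data-verified (write pass, then byte-compare readback).
    $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0
    $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    # End the round; the app restarts for the next iteration.
    $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
done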
00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.857 10:00:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:28.857 10:00:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.857 Malloc0 00:06:28.857 10:00:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.117 Malloc1 00:06:29.117 10:00:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.117 10:00:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.375 /dev/nbd0 00:06:29.375 10:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.375 10:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.375 1+0 records in 00:06:29.375 1+0 records out 
00:06:29.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189835 s, 21.6 MB/s 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.375 10:00:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:29.375 10:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.375 10:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.375 10:00:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.634 /dev/nbd1 00:06:29.634 10:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.634 10:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.634 1+0 records in 00:06:29.634 1+0 records out 00:06:29.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196368 s, 20.9 MB/s 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.634 10:00:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:29.634 10:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.634 10:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.634 10:00:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.634 10:00:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.634 10:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.892 10:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.892 { 00:06:29.892 "nbd_device": "/dev/nbd0", 00:06:29.892 "bdev_name": "Malloc0" 00:06:29.892 }, 00:06:29.892 { 00:06:29.892 "nbd_device": "/dev/nbd1", 00:06:29.892 "bdev_name": "Malloc1" 00:06:29.892 } 
00:06:29.892 ]' 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.892 { 00:06:29.892 "nbd_device": "/dev/nbd0", 00:06:29.892 "bdev_name": "Malloc0" 00:06:29.892 }, 00:06:29.892 { 00:06:29.892 "nbd_device": "/dev/nbd1", 00:06:29.892 "bdev_name": "Malloc1" 00:06:29.892 } 00:06:29.892 ]' 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.892 /dev/nbd1' 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.892 /dev/nbd1' 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.892 256+0 records in 00:06:29.892 256+0 records out 00:06:29.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596255 s, 176 MB/s 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.892 256+0 records in 00:06:29.892 256+0 records out 00:06:29.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136755 s, 76.7 MB/s 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.892 256+0 records in 00:06:29.892 256+0 records out 00:06:29.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154511 s, 67.9 MB/s 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.892 10:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.893 10:00:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.893 10:00:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.153 10:00:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.413 10:00:58 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.413 10:00:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.413 10:00:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.671 10:00:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.930 [2024-11-03 10:00:59.039547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.930 [2024-11-03 10:00:59.066071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.930 [2024-11-03 10:00:59.066155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.930 [2024-11-03 10:00:59.094636] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.930 [2024-11-03 10:00:59.094682] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.225 spdk_app_start Round 2 00:06:34.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.225 10:01:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.225 10:01:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:34.225 10:01:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70654 /var/tmp/spdk-nbd.sock 00:06:34.225 10:01:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70654 ']' 00:06:34.225 10:01:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.225 10:01:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.225 10:01:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
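[editor] The write/verify pass inside each round is nbd_dd_data_verify, and its trace (bdev/nbd_common.sh@70-85) reconstructs almost line for line into the helper below; treat it as a sketch rather than the verbatim source.

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    if [ "$operation" = write ]; then
        # Seed 1 MiB of random data, then push it through every NBD device
        # with direct I/O so it lands in the malloc bdevs underneath.
        dd if=/dev/urandom of=$tmp_file bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Read back and byte-compare the first 1 MiB of each device, then
        # drop the reference file.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M $tmp_file $i
        done
        rm $tmp_file
    fi
}

The modest dd throughput logged in these passes (roughly 50-77 MB/s per device) most likely reflects the small 4 KiB direct-I/O transfers rather than the malloc bdev underneath.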
00:06:34.225 10:01:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.225 10:01:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.225 10:01:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.225 10:01:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:34.225 10:01:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.225 Malloc0 00:06:34.225 10:01:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.225 Malloc1 00:06:34.486 10:01:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.486 /dev/nbd0 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.486 1+0 records in 00:06:34.486 1+0 records out 
00:06:34.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335442 s, 12.2 MB/s 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:34.486 10:01:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.486 10:01:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.746 /dev/nbd1 00:06:34.746 10:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.746 10:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.746 1+0 records in 00:06:34.746 1+0 records out 00:06:34.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192799 s, 21.2 MB/s 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:34.746 10:01:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:34.746 10:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.746 10:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.746 10:01:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.746 10:01:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.746 10:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.006 { 00:06:35.006 "nbd_device": "/dev/nbd0", 00:06:35.006 "bdev_name": "Malloc0" 00:06:35.006 }, 00:06:35.006 { 00:06:35.006 "nbd_device": "/dev/nbd1", 00:06:35.006 "bdev_name": "Malloc1" 00:06:35.006 } 
00:06:35.006 ]' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.006 { 00:06:35.006 "nbd_device": "/dev/nbd0", 00:06:35.006 "bdev_name": "Malloc0" 00:06:35.006 }, 00:06:35.006 { 00:06:35.006 "nbd_device": "/dev/nbd1", 00:06:35.006 "bdev_name": "Malloc1" 00:06:35.006 } 00:06:35.006 ]' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.006 /dev/nbd1' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.006 /dev/nbd1' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.006 256+0 records in 00:06:35.006 256+0 records out 00:06:35.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0099367 s, 106 MB/s 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.006 256+0 records in 00:06:35.006 256+0 records out 00:06:35.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162468 s, 64.5 MB/s 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.006 256+0 records in 00:06:35.006 256+0 records out 00:06:35.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167126 s, 62.7 MB/s 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.006 10:01:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.007 10:01:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.266 10:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.267 10:01:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.527 10:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.788 10:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.788 10:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.788 10:01:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.788 10:01:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.788 10:01:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.049 10:01:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:36.049 [2024-11-03 10:01:04.302862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.049 [2024-11-03 10:01:04.329676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.049 [2024-11-03 10:01:04.329773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.049 [2024-11-03 10:01:04.358633] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.049 [2024-11-03 10:01:04.358669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.406 10:01:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70654 /var/tmp/spdk-nbd.sock 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70654 ']' 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
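Note: the nbd_dd_data_verify trace above is a standard block-device round-trip check: seed a 1 MiB file from /dev/urandom, write it to each NBD device with O_DIRECT, then cmp the device contents back against the file before tearing the disks down over RPC. A minimal sketch of that pattern, assuming $nbd points at an already-connected device (variable name illustrative, not the nbd_common.sh helper itself):

    # Write a known random payload to an NBD device and verify it round-trips.
    tmp_file=$(mktemp)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256         # 1 MiB payload
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # O_DIRECT: bypass the page cache
    cmp -b -n 1M "$tmp_file" "$nbd"                             # non-zero exit on any byte mismatch
    rm "$tmp_file"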
00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:39.406 10:01:07 event.app_repeat -- event/event.sh@39 -- # killprocess 70654 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70654 ']' 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70654 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70654 00:06:39.406 killing process with pid 70654 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70654' 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70654 00:06:39.406 10:01:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70654 00:06:39.406 spdk_app_start is called in Round 0. 00:06:39.406 Shutdown signal received, stop current app iteration 00:06:39.406 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:39.407 spdk_app_start is called in Round 1. 00:06:39.407 Shutdown signal received, stop current app iteration 00:06:39.407 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:39.407 spdk_app_start is called in Round 2. 00:06:39.407 Shutdown signal received, stop current app iteration 00:06:39.407 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:39.407 spdk_app_start is called in Round 3. 00:06:39.407 Shutdown signal received, stop current app iteration 00:06:39.407 ************************************ 00:06:39.407 END TEST app_repeat 00:06:39.407 ************************************ 00:06:39.407 10:01:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:39.407 10:01:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:39.407 00:06:39.407 real 0m16.937s 00:06:39.407 user 0m37.792s 00:06:39.407 sys 0m2.103s 00:06:39.407 10:01:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.407 10:01:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.407 10:01:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:39.407 10:01:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.407 10:01:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.407 10:01:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.407 10:01:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.407 ************************************ 00:06:39.407 START TEST cpu_locks 00:06:39.407 ************************************ 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:39.407 * Looking for test storage... 
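Note: the killprocess trace above shows why the helper is more than a bare kill: it probes the pid with kill -0, checks via ps that the process name still looks like an SPDK reactor (and is not sudo), and only then signals and reaps it, so a recycled pid is never killed by accident. Roughly, simplified from the autotest_common.sh trace:

    # Kill a test target only after sanity-checking what the pid points at.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1             # never blindly kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap the child (SPDK apps shut down on SIGTERM)
    }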
00:06:39.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.407 10:01:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 
00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.407 --rc genhtml_branch_coverage=1 00:06:39.407 --rc genhtml_function_coverage=1 00:06:39.407 --rc genhtml_legend=1 00:06:39.407 --rc geninfo_all_blocks=1 00:06:39.407 --rc geninfo_unexecuted_blocks=1 00:06:39.407 00:06:39.407 ' 00:06:39.407 10:01:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:39.407 10:01:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:39.407 10:01:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:39.407 10:01:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.407 10:01:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.407 ************************************ 00:06:39.407 START TEST default_locks 00:06:39.407 ************************************ 00:06:39.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71079 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71079 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71079 ']' 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.407 10:01:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.667 [2024-11-03 10:01:07.820605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
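Note: the lt 1.15 2 / cmp_versions trace further above is scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on separators and compared field by field, with missing fields treated as zero. The idea, as a self-contained sketch (assumes purely numeric fields; the traced implementation also splits on ':' and supports more comparison operators):

    # "Is version $1 strictly older than version $2?"
    version_lt() {
        local IFS=.-                               # split fields on '.' and '-'
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}      # pad the shorter version with zeros
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"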
00:06:39.667 [2024-11-03 10:01:07.820863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71079 ] 00:06:39.667 [2024-11-03 10:01:07.956147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.667 [2024-11-03 10:01:07.989475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71079 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71079 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71079 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71079 ']' 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71079 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:40.599 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.600 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71079 00:06:40.600 killing process with pid 71079 00:06:40.600 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.600 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.600 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71079' 00:06:40.600 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71079 00:06:40.600 10:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71079 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71079 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71079 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.859 ERROR: process (pid: 71079) is no longer running 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71079 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71079 ']' 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.859 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71079) - No such process 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.859 ************************************ 00:06:40.859 END TEST default_locks 00:06:40.859 ************************************ 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.859 00:06:40.859 real 0m1.378s 00:06:40.859 user 0m1.432s 00:06:40.859 sys 0m0.380s 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.859 10:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.859 10:01:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.859 10:01:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.859 10:01:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.859 10:01:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.859 ************************************ 00:06:40.859 START TEST default_locks_via_rpc 00:06:40.859 ************************************ 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71121 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71121 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71121 ']' 
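Note: default_locks above passes when two things hold: while spdk_tgt -m 0x1 runs, lslocks shows the pid holding a file lock whose name starts with spdk_cpu_lock, and after killprocess the NOT waitforlisten probe confirms nothing answers any more (the 'No such process' line is that expected failure). The lock check itself reduces to one pipeline, exactly as traced:

    # Does this pid hold an SPDK per-core lock file (/var/tmp/spdk_cpu_lock_<core>)?
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist "$pid" && echo "core lock held"    # $pid: the spdk_tgt pid under test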
00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.859 10:01:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.119 [2024-11-03 10:01:09.263110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:41.120 [2024-11-03 10:01:09.263242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71121 ] 00:06:41.120 [2024-11-03 10:01:09.389812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.120 [2024-11-03 10:01:09.422933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71121 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71121 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71121 00:06:42.061 10:01:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71121 ']' 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71121 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71121 00:06:42.061 killing process with pid 71121 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71121' 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71121 00:06:42.061 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71121 00:06:42.319 00:06:42.319 real 0m1.469s 00:06:42.319 user 0m1.518s 00:06:42.319 sys 0m0.407s 00:06:42.319 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.319 ************************************ 00:06:42.319 END TEST default_locks_via_rpc 00:06:42.319 ************************************ 00:06:42.319 10:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.579 10:01:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:42.579 10:01:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.579 10:01:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.579 10:01:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.579 ************************************ 00:06:42.579 START TEST non_locking_app_on_locked_coremask 00:06:42.579 ************************************ 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:42.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71173 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71173 /var/tmp/spdk.sock 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71173 ']' 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
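Note: default_locks_via_rpc, which just finished above, exercises the same core locks at runtime rather than at startup: framework_disable_cpumask_locks releases the per-core lock files on a live target and framework_enable_cpumask_locks re-acquires them, with no_locks/locks_exist verifying each state. Issued by hand, the sequence would look roughly like this ($pid standing in for the target's pid):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # drop the lock files
    lslocks -p "$pid" | grep -c spdk_cpu_lock                   # expect 0 (grep exits 1)
    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # re-claim them
    lslocks -p "$pid" | grep -c spdk_cpu_lock                   # expect 1 again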
00:06:42.579 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.580 10:01:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.580 [2024-11-03 10:01:10.788119] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:42.580 [2024-11-03 10:01:10.788254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71173 ] 00:06:42.580 [2024-11-03 10:01:10.922458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.842 [2024-11-03 10:01:10.951092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71183 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71183 /var/tmp/spdk2.sock 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71183 ']' 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.408 10:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.408 [2024-11-03 10:01:11.676405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:43.408 [2024-11-03 10:01:11.676962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71183 ] 00:06:43.667 [2024-11-03 10:01:11.808446] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.667 [2024-11-03 10:01:11.808478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.667 [2024-11-03 10:01:11.864629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.232 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.232 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:44.233 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71173 00:06:44.233 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71173 00:06:44.233 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71173 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71173 ']' 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71173 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71173 00:06:44.839 killing process with pid 71173 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71173' 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71173 00:06:44.839 10:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71173 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71183 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71183 ']' 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71183 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71183 00:06:45.110 killing process with pid 71183 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71183' 00:06:45.110 10:01:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71183 00:06:45.110 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71183 00:06:45.371 ************************************ 00:06:45.371 END TEST non_locking_app_on_locked_coremask 00:06:45.371 ************************************ 00:06:45.371 00:06:45.371 real 0m2.862s 00:06:45.371 user 0m3.208s 00:06:45.371 sys 0m0.722s 00:06:45.371 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.371 10:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.371 10:01:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.371 10:01:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.371 10:01:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.371 10:01:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.371 ************************************ 00:06:45.371 START TEST locking_app_on_unlocked_coremask 00:06:45.371 ************************************ 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71240 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71240 /var/tmp/spdk.sock 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71240 ']' 00:06:45.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.371 10:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.371 [2024-11-03 10:01:13.707355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:45.371 [2024-11-03 10:01:13.707596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:06:45.632 [2024-11-03 10:01:13.840879] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.632 [2024-11-03 10:01:13.841000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.632 [2024-11-03 10:01:13.869154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71252 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71252 /var/tmp/spdk2.sock 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71252 ']' 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.204 10:01:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.204 [2024-11-03 10:01:14.563571] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
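Note: every 'Waiting for process to start up and listen on UNIX domain socket ...' line in these tests is the waitforlisten helper polling until the target's RPC socket answers. A minimal stand-in with the same shape (using rpc_get_methods as the liveness probe and a 10 s cap are assumptions here; the real helper in autotest_common.sh retries differently):

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" || return 1             # target died during startup
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
                rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                   # never came up
    }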
00:06:46.204 [2024-11-03 10:01:14.563884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71252 ] 00:06:46.465 [2024-11-03 10:01:14.694871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.465 [2024-11-03 10:01:14.751169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.404 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.404 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.404 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71252 00:06:47.404 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71252 00:06:47.404 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71240 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71240 ']' 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71240 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71240 00:06:47.664 killing process with pid 71240 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71240' 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71240 00:06:47.664 10:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71240 00:06:47.925 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71252 00:06:47.925 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71252 ']' 00:06:47.925 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71252 00:06:47.925 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.925 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.925 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71252 00:06:48.185 killing process with pid 71252 00:06:48.185 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.185 10:01:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.185 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71252' 00:06:48.185 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71252 00:06:48.185 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71252 00:06:48.185 ************************************ 00:06:48.185 END TEST locking_app_on_unlocked_coremask 00:06:48.185 ************************************ 00:06:48.185 00:06:48.185 real 0m2.876s 00:06:48.185 user 0m3.151s 00:06:48.185 sys 0m0.779s 00:06:48.185 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.185 10:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.446 10:01:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:48.446 10:01:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.446 10:01:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.446 10:01:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.446 ************************************ 00:06:48.446 START TEST locking_app_on_locked_coremask 00:06:48.446 ************************************ 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:48.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71310 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71310 /var/tmp/spdk.sock 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71310 ']' 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.446 10:01:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.446 [2024-11-03 10:01:16.639861] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:48.446 [2024-11-03 10:01:16.640184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:06:48.446 [2024-11-03 10:01:16.773102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.707 [2024-11-03 10:01:16.824501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71326 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71326 /var/tmp/spdk2.sock 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71326 /var/tmp/spdk2.sock 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71326 /var/tmp/spdk2.sock 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71326 ']' 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.280 10:01:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.280 [2024-11-03 10:01:17.570093] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:49.280 [2024-11-03 10:01:17.570531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71326 ] 00:06:49.540 [2024-11-03 10:01:17.711374] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71310 has claimed it. 00:06:49.540 [2024-11-03 10:01:17.711461] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.108 ERROR: process (pid: 71326) is no longer running 00:06:50.108 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71326) - No such process 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71310 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71310 00:06:50.108 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71310 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71310 ']' 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71310 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71310 00:06:50.368 killing process with pid 71310 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71310' 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71310 00:06:50.368 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71310 00:06:50.629 ************************************ 00:06:50.629 END TEST locking_app_on_locked_coremask 00:06:50.629 ************************************ 00:06:50.629 00:06:50.629 real 0m2.167s 00:06:50.629 user 0m2.404s 00:06:50.629 sys 0m0.568s 00:06:50.629 10:01:18 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.629 10:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.629 10:01:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:50.629 10:01:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.629 10:01:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.629 10:01:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.629 ************************************ 00:06:50.629 START TEST locking_overlapped_coremask 00:06:50.629 ************************************ 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71368 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71368 /var/tmp/spdk.sock 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71368 ']' 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.629 10:01:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.629 [2024-11-03 10:01:18.863974] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
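Note: locking_app_on_locked_coremask, which ended just above, is the negative counterpart of the earlier tests: with core 0 already claimed by pid 71310, the second instance's 'Cannot create lock on core 0, probably process 71310 has claimed it' abort is the desired result, so its launch is wrapped in the NOT helper, which inverts the exit status. Simplified (the traced helper also special-cases statuses above 128, i.e. deaths by signal):

    # Expected-failure wrapper: succeed only if the command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock  # passes because startup failed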
00:06:50.629 [2024-11-03 10:01:18.864087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71368 ] 00:06:50.890 [2024-11-03 10:01:18.998940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.890 [2024-11-03 10:01:19.037099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.890 [2024-11-03 10:01:19.037431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.890 [2024-11-03 10:01:19.037466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71386 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71386 /var/tmp/spdk2.sock 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71386 /var/tmp/spdk2.sock 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71386 /var/tmp/spdk2.sock 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71386 ']' 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.461 10:01:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.461 [2024-11-03 10:01:19.728389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
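Why the second target in this test is expected to die: its core mask overlaps the first one's. A quick recheck with the masks from this run, assuming nothing beyond shell arithmetic: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the two masks collide on exactly one core, the one named in the claim_cpu_cores error just below.

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2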
00:06:51.461 [2024-11-03 10:01:19.728513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71386 ] 00:06:51.720 [2024-11-03 10:01:19.869393] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71368 has claimed it. 00:06:51.720 [2024-11-03 10:01:19.869448] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.980 ERROR: process (pid: 71386) is no longer running 00:06:51.980 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71386) - No such process 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71368 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71368 ']' 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71368 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.980 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71368 00:06:52.241 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.241 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.241 killing process with pid 71368 00:06:52.241 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71368' 00:06:52.241 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71368 00:06:52.241 10:01:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71368 00:06:52.241 00:06:52.241 real 0m1.788s 00:06:52.241 user 0m4.870s 00:06:52.241 sys 0m0.355s 00:06:52.241 ************************************ 00:06:52.241 END TEST locking_overlapped_coremask 00:06:52.241 ************************************ 00:06:52.241 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.241 10:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.502 10:01:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:52.502 10:01:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.502 10:01:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.502 10:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.502 ************************************ 00:06:52.502 START TEST locking_overlapped_coremask_via_rpc 00:06:52.502 ************************************ 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71428 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71428 /var/tmp/spdk.sock 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71428 ']' 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.502 10:01:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.502 [2024-11-03 10:01:20.716119] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:52.502 [2024-11-03 10:01:20.716269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71428 ] 00:06:52.502 [2024-11-03 10:01:20.851576] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
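The check_remaining_locks helper traced above (end of the overlapped-coremask test) verifies that the surviving target still owns exactly the lock files for mask 0x7. A condensed sketch of that comparison, using the same globs as event/cpu_locks.sh in this log:

  locks=(/var/tmp/spdk_cpu_lock_*)              # whatever lock files exist now
  expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo "locks intact for cores 0-2"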
00:06:52.502 [2024-11-03 10:01:20.851634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.791 [2024-11-03 10:01:20.907350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.791 [2024-11-03 10:01:20.907805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.791 [2024-11-03 10:01:20.907903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71446 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71446 /var/tmp/spdk2.sock 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71446 ']' 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.357 10:01:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.357 [2024-11-03 10:01:21.647335] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:53.357 [2024-11-03 10:01:21.647461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71446 ] 00:06:53.616 [2024-11-03 10:01:21.788596] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.616 [2024-11-03 10:01:21.788642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.616 [2024-11-03 10:01:21.854035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.616 [2024-11-03 10:01:21.857327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.616 [2024-11-03 10:01:21.857402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.187 [2024-11-03 10:01:22.467365] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71428 has claimed it. 
00:06:54.187 request: 00:06:54.187 { 00:06:54.187 "method": "framework_enable_cpumask_locks", 00:06:54.187 "req_id": 1 00:06:54.187 } 00:06:54.187 Got JSON-RPC error response 00:06:54.187 response: 00:06:54.187 { 00:06:54.187 "code": -32603, 00:06:54.187 "message": "Failed to claim CPU core: 2" 00:06:54.187 } 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71428 /var/tmp/spdk.sock 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71428 ']' 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.187 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:54.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71446 /var/tmp/spdk2.sock 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71446 ']' 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
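Both targets in this test start with --disable-cpumask-locks, so neither claims cores at boot; framework_enable_cpumask_locks is the RPC that claims them afterwards. A sketch of the two calls the test issues (socket path and rpc.py location as in this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # first target (mask 0x7): claims its cores and returns success
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # second target (mask 0x1c): rejected with -32603 "Failed to claim CPU core: 2", as above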
00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.448 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.709 00:06:54.709 real 0m2.203s 00:06:54.709 user 0m1.001s 00:06:54.709 sys 0m0.129s 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.709 ************************************ 00:06:54.709 END TEST locking_overlapped_coremask_via_rpc 00:06:54.709 ************************************ 00:06:54.709 10:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.709 10:01:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.709 10:01:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71428 ]] 00:06:54.709 10:01:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71428 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71428 ']' 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71428 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71428 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71428' 00:06:54.709 killing process with pid 71428 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71428 00:06:54.709 10:01:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71428 00:06:54.970 10:01:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71446 ]] 00:06:54.970 10:01:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71446 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71446 ']' 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71446 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.970 
10:01:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71446 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:54.970 killing process with pid 71446 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71446' 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71446 00:06:54.970 10:01:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71446 00:06:55.230 10:01:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.230 10:01:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.230 10:01:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71428 ]] 00:06:55.230 Process with pid 71428 is not found 00:06:55.230 10:01:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71428 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71428 ']' 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71428 00:06:55.230 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71428) - No such process 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71428 is not found' 00:06:55.230 10:01:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71446 ]] 00:06:55.230 10:01:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71446 00:06:55.230 Process with pid 71446 is not found 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71446 ']' 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71446 00:06:55.230 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71446) - No such process 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71446 is not found' 00:06:55.230 10:01:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.230 00:06:55.230 real 0m15.850s 00:06:55.230 user 0m27.465s 00:06:55.230 sys 0m4.125s 00:06:55.230 ************************************ 00:06:55.230 END TEST cpu_locks 00:06:55.230 ************************************ 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.230 10:01:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.230 00:06:55.230 real 0m42.080s 00:06:55.230 user 1m21.794s 00:06:55.230 sys 0m6.964s 00:06:55.230 ************************************ 00:06:55.230 END TEST event 00:06:55.230 10:01:23 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.230 10:01:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.230 ************************************ 00:06:55.230 10:01:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.230 10:01:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.230 10:01:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.230 10:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:55.230 ************************************ 00:06:55.230 START TEST thread 00:06:55.230 ************************************ 00:06:55.230 10:01:23 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.492 * Looking for test storage... 
00:06:55.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.492 10:01:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.492 10:01:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.492 10:01:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.492 10:01:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.492 10:01:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.492 10:01:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.492 10:01:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.492 10:01:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.492 10:01:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.492 10:01:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.492 10:01:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.492 10:01:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:55.492 10:01:23 thread -- scripts/common.sh@345 -- # : 1 00:06:55.492 10:01:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.492 10:01:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.492 10:01:23 thread -- scripts/common.sh@365 -- # decimal 1 00:06:55.492 10:01:23 thread -- scripts/common.sh@353 -- # local d=1 00:06:55.492 10:01:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.492 10:01:23 thread -- scripts/common.sh@355 -- # echo 1 00:06:55.492 10:01:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.492 10:01:23 thread -- scripts/common.sh@366 -- # decimal 2 00:06:55.492 10:01:23 thread -- scripts/common.sh@353 -- # local d=2 00:06:55.492 10:01:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.492 10:01:23 thread -- scripts/common.sh@355 -- # echo 2 00:06:55.492 10:01:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.492 10:01:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.492 10:01:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.492 10:01:23 thread -- scripts/common.sh@368 -- # return 0 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.492 --rc genhtml_branch_coverage=1 00:06:55.492 --rc genhtml_function_coverage=1 00:06:55.492 --rc genhtml_legend=1 00:06:55.492 --rc geninfo_all_blocks=1 00:06:55.492 --rc geninfo_unexecuted_blocks=1 00:06:55.492 00:06:55.492 ' 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.492 --rc genhtml_branch_coverage=1 00:06:55.492 --rc genhtml_function_coverage=1 00:06:55.492 --rc genhtml_legend=1 00:06:55.492 --rc geninfo_all_blocks=1 00:06:55.492 --rc geninfo_unexecuted_blocks=1 00:06:55.492 00:06:55.492 ' 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:55.492 --rc genhtml_branch_coverage=1 00:06:55.492 --rc genhtml_function_coverage=1 00:06:55.492 --rc genhtml_legend=1 00:06:55.492 --rc geninfo_all_blocks=1 00:06:55.492 --rc geninfo_unexecuted_blocks=1 00:06:55.492 00:06:55.492 ' 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.492 --rc genhtml_branch_coverage=1 00:06:55.492 --rc genhtml_function_coverage=1 00:06:55.492 --rc genhtml_legend=1 00:06:55.492 --rc geninfo_all_blocks=1 00:06:55.492 --rc geninfo_unexecuted_blocks=1 00:06:55.492 00:06:55.492 ' 00:06:55.492 10:01:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.492 10:01:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.492 ************************************ 00:06:55.492 START TEST thread_poller_perf 00:06:55.492 ************************************ 00:06:55.492 10:01:23 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.492 [2024-11-03 10:01:23.719792] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:55.492 [2024-11-03 10:01:23.720243] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71573 ] 00:06:55.753 [2024-11-03 10:01:23.857111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.753 Running 1000 pollers for 1 seconds with 1 microseconds period. 
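The cmp_versions trace above is how thread.sh decides which lcov flags to export: each version string is split on the characters ".-:" and compared field by field, and since the installed lcov reports 1.15, `lt 1.15 2` already succeeds on the first field (1 < 2), so the branch/function-coverage options are exported. A compact recheck of that first-field compare, nothing assumed beyond bash:

  IFS=. read -ra v1 <<< "1.15"
  IFS=. read -ra v2 <<< "2"
  (( v1[0] < v2[0] )) && echo "lcov 1.15 < 2: export LCOV_OPTS with coverage flags"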
00:06:55.753 [2024-11-03 10:01:23.921047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.694 [2024-11-03T10:01:25.056Z] ====================================== 00:06:56.694 [2024-11-03T10:01:25.056Z] busy:2616569216 (cyc) 00:06:56.694 [2024-11-03T10:01:25.056Z] total_run_count: 307000 00:06:56.694 [2024-11-03T10:01:25.056Z] tsc_hz: 2600000000 (cyc) 00:06:56.694 [2024-11-03T10:01:25.056Z] ====================================== 00:06:56.694 [2024-11-03T10:01:25.056Z] poller_cost: 8523 (cyc), 3278 (nsec) 00:06:56.694 00:06:56.694 real 0m1.317s 00:06:56.694 user 0m1.141s 00:06:56.694 sys 0m0.069s 00:06:56.694 ************************************ 00:06:56.694 END TEST thread_poller_perf 00:06:56.694 ************************************ 00:06:56.694 10:01:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.694 10:01:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.955 10:01:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.955 10:01:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:56.955 10:01:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.955 10:01:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.955 ************************************ 00:06:56.955 START TEST thread_poller_perf 00:06:56.955 ************************************ 00:06:56.955 10:01:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.955 [2024-11-03 10:01:25.091983] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:56.955 [2024-11-03 10:01:25.092117] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:06:56.955 [2024-11-03 10:01:25.226810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.955 Running 1000 pollers for 1 seconds with 0 microseconds period. 
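How poller_cost falls out of the summary above: cycles per poller call are the busy cycles divided by total_run_count, and the nanosecond figure is that quotient scaled by the TSC rate. A worked recheck with this run's numbers (awk only; truncating as %d does reproduces the printed values):

  awk 'BEGIN { busy = 2616569216; n = 307000; hz = 2600000000
               cyc = busy / n
               printf "%d cyc, %d nsec\n", cyc, cyc * 1e9 / hz }'
  # -> 8523 cyc, 3278 nsec, matching the poller_cost line above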
00:06:56.955 [2024-11-03 10:01:25.260368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.341 [2024-11-03T10:01:26.703Z] ====================================== 00:06:58.341 [2024-11-03T10:01:26.703Z] busy:2603062352 (cyc) 00:06:58.341 [2024-11-03T10:01:26.703Z] total_run_count: 3963000 00:06:58.341 [2024-11-03T10:01:26.703Z] tsc_hz: 2600000000 (cyc) 00:06:58.341 [2024-11-03T10:01:26.703Z] ====================================== 00:06:58.341 [2024-11-03T10:01:26.703Z] poller_cost: 656 (cyc), 252 (nsec) 00:06:58.341 ************************************ 00:06:58.341 END TEST thread_poller_perf 00:06:58.341 ************************************ 00:06:58.341 00:06:58.341 real 0m1.278s 00:06:58.341 user 0m1.111s 00:06:58.341 sys 0m0.060s 00:06:58.341 10:01:26 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.341 10:01:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.341 10:01:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.341 ************************************ 00:06:58.341 END TEST thread 00:06:58.341 ************************************ 00:06:58.341 00:06:58.341 real 0m2.851s 00:06:58.341 user 0m2.365s 00:06:58.341 sys 0m0.251s 00:06:58.341 10:01:26 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.341 10:01:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.341 10:01:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:58.341 10:01:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:58.341 10:01:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.341 10:01:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.341 10:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:58.341 ************************************ 00:06:58.341 START TEST app_cmdline 00:06:58.341 ************************************ 00:06:58.341 10:01:26 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:58.341 * Looking for test storage... 
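The same arithmetic applied to the 0-microsecond-period run above explains the gap between the two poller costs: with no timer period the framework invokes the poller back-to-back, so far more calls land in the same second and each call is much cheaper, presumably because the timed path also pays for per-call timer bookkeeping.

  awk 'BEGIN { printf "%d cyc, %d nsec\n", 2603062352 / 3963000,
               (2603062352 / 3963000) * 1e9 / 2600000000 }'
  # -> 656 cyc, 252 nsec, versus 8523 cyc / 3278 nsec with a 1 usec period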
00:06:58.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:58.341 10:01:26 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.341 10:01:26 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.341 10:01:26 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.341 10:01:26 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:58.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.341 10:01:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:58.341 10:01:26 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.341 10:01:26 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.342 --rc genhtml_branch_coverage=1 00:06:58.342 --rc genhtml_function_coverage=1 00:06:58.342 --rc genhtml_legend=1 00:06:58.342 --rc geninfo_all_blocks=1 00:06:58.342 --rc geninfo_unexecuted_blocks=1 00:06:58.342 00:06:58.342 ' 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.342 --rc genhtml_branch_coverage=1 00:06:58.342 --rc genhtml_function_coverage=1 00:06:58.342 --rc genhtml_legend=1 00:06:58.342 --rc geninfo_all_blocks=1 00:06:58.342 --rc geninfo_unexecuted_blocks=1 00:06:58.342 00:06:58.342 ' 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.342 --rc genhtml_branch_coverage=1 00:06:58.342 --rc genhtml_function_coverage=1 00:06:58.342 --rc genhtml_legend=1 00:06:58.342 --rc geninfo_all_blocks=1 00:06:58.342 --rc geninfo_unexecuted_blocks=1 00:06:58.342 00:06:58.342 ' 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.342 --rc genhtml_branch_coverage=1 00:06:58.342 --rc genhtml_function_coverage=1 00:06:58.342 --rc genhtml_legend=1 00:06:58.342 --rc geninfo_all_blocks=1 00:06:58.342 --rc geninfo_unexecuted_blocks=1 00:06:58.342 00:06:58.342 ' 00:06:58.342 10:01:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.342 10:01:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71693 00:06:58.342 10:01:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71693 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71693 ']' 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.342 10:01:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.342 10:01:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.342 [2024-11-03 10:01:26.685317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
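Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on the spdk_tgt launch above: this run whitelists exactly two RPC methods, which is what the cmdline test exercises next. A sketch of the two outcomes (paths as in this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # whitelisted: returns the version JSON shown below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # not whitelisted: rejected with -32601 "Method not found", also below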
00:06:58.342 [2024-11-03 10:01:26.685948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71693 ] 00:06:58.603 [2024-11-03 10:01:26.832564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.603 [2024-11-03 10:01:26.882181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:59.548 { 00:06:59.548 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:59.548 "fields": { 00:06:59.548 "major": 24, 00:06:59.548 "minor": 9, 00:06:59.548 "patch": 1, 00:06:59.548 "suffix": "-pre", 00:06:59.548 "commit": "b18e1bd62" 00:06:59.548 } 00:06:59.548 } 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.548 10:01:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.548 10:01:27 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.809 request: 00:06:59.809 { 00:06:59.809 "method": "env_dpdk_get_mem_stats", 00:06:59.809 "req_id": 1 00:06:59.809 } 00:06:59.809 Got JSON-RPC error response 00:06:59.809 response: 00:06:59.809 { 00:06:59.809 "code": -32601, 00:06:59.809 "message": "Method not found" 00:06:59.809 } 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.809 10:01:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71693 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71693 ']' 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71693 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71693 00:06:59.809 killing process with pid 71693 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71693' 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@969 -- # kill 71693 00:06:59.809 10:01:27 app_cmdline -- common/autotest_common.sh@974 -- # wait 71693 00:07:00.070 ************************************ 00:07:00.070 END TEST app_cmdline 00:07:00.070 ************************************ 00:07:00.070 00:07:00.070 real 0m1.878s 00:07:00.070 user 0m2.105s 00:07:00.070 sys 0m0.495s 00:07:00.070 10:01:28 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.070 10:01:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.070 10:01:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.070 10:01:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.070 10:01:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.070 10:01:28 -- common/autotest_common.sh@10 -- # set +x 00:07:00.070 ************************************ 00:07:00.070 START TEST version 00:07:00.070 ************************************ 00:07:00.070 10:01:28 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.367 * Looking for test storage... 
00:07:00.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.367 10:01:28 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.367 10:01:28 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.367 10:01:28 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.367 10:01:28 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.367 10:01:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.368 10:01:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.368 10:01:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.368 10:01:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.368 10:01:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.368 10:01:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.368 10:01:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.368 10:01:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.368 10:01:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.368 10:01:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.368 10:01:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.368 10:01:28 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.368 10:01:28 version -- scripts/common.sh@345 -- # : 1 00:07:00.368 10:01:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.368 10:01:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.368 10:01:28 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.368 10:01:28 version -- scripts/common.sh@353 -- # local d=1 00:07:00.368 10:01:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.368 10:01:28 version -- scripts/common.sh@355 -- # echo 1 00:07:00.368 10:01:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.368 10:01:28 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.368 10:01:28 version -- scripts/common.sh@353 -- # local d=2 00:07:00.368 10:01:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.368 10:01:28 version -- scripts/common.sh@355 -- # echo 2 00:07:00.368 10:01:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.368 10:01:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.368 10:01:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.368 10:01:28 version -- scripts/common.sh@368 -- # return 0 00:07:00.368 10:01:28 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.368 10:01:28 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.368 --rc genhtml_branch_coverage=1 00:07:00.368 --rc genhtml_function_coverage=1 00:07:00.368 --rc genhtml_legend=1 00:07:00.368 --rc geninfo_all_blocks=1 00:07:00.368 --rc geninfo_unexecuted_blocks=1 00:07:00.368 00:07:00.368 ' 00:07:00.368 10:01:28 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.368 --rc genhtml_branch_coverage=1 00:07:00.368 --rc genhtml_function_coverage=1 00:07:00.368 --rc genhtml_legend=1 00:07:00.368 --rc geninfo_all_blocks=1 00:07:00.368 --rc geninfo_unexecuted_blocks=1 00:07:00.368 00:07:00.368 ' 00:07:00.368 10:01:28 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.368 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:00.368 --rc genhtml_branch_coverage=1 00:07:00.368 --rc genhtml_function_coverage=1 00:07:00.368 --rc genhtml_legend=1 00:07:00.368 --rc geninfo_all_blocks=1 00:07:00.368 --rc geninfo_unexecuted_blocks=1 00:07:00.368 00:07:00.368 ' 00:07:00.368 10:01:28 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.368 --rc genhtml_branch_coverage=1 00:07:00.368 --rc genhtml_function_coverage=1 00:07:00.368 --rc genhtml_legend=1 00:07:00.368 --rc geninfo_all_blocks=1 00:07:00.368 --rc geninfo_unexecuted_blocks=1 00:07:00.368 00:07:00.368 ' 00:07:00.368 10:01:28 version -- app/version.sh@17 -- # get_header_version major 00:07:00.368 10:01:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # cut -f2 00:07:00.368 10:01:28 version -- app/version.sh@17 -- # major=24 00:07:00.368 10:01:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.368 10:01:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # cut -f2 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.368 10:01:28 version -- app/version.sh@18 -- # minor=9 00:07:00.368 10:01:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.368 10:01:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # cut -f2 00:07:00.368 10:01:28 version -- app/version.sh@19 -- # patch=1 00:07:00.368 10:01:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.368 10:01:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.368 10:01:28 version -- app/version.sh@14 -- # cut -f2 00:07:00.368 10:01:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.368 10:01:28 version -- app/version.sh@22 -- # version=24.9 00:07:00.368 10:01:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.368 10:01:28 version -- app/version.sh@25 -- # version=24.9.1 00:07:00.368 10:01:28 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:00.368 10:01:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.368 10:01:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.368 10:01:28 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:00.368 10:01:28 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:00.368 00:07:00.368 real 0m0.210s 00:07:00.368 user 0m0.133s 00:07:00.368 sys 0m0.099s 00:07:00.368 ************************************ 00:07:00.368 END TEST version 00:07:00.368 ************************************ 00:07:00.368 10:01:28 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.368 10:01:28 
version -- common/autotest_common.sh@10 -- # set +x 00:07:00.368 10:01:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.368 10:01:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:00.368 10:01:28 -- spdk/autotest.sh@194 -- # uname -s 00:07:00.368 10:01:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:00.368 10:01:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.368 10:01:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.368 10:01:28 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:00.368 10:01:28 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:00.368 10:01:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:00.368 10:01:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.368 10:01:28 -- common/autotest_common.sh@10 -- # set +x 00:07:00.368 ************************************ 00:07:00.368 START TEST blockdev_nvme 00:07:00.368 ************************************ 00:07:00.368 10:01:28 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:00.650 * Looking for test storage... 00:07:00.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.650 10:01:28 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.650 --rc genhtml_branch_coverage=1 00:07:00.650 --rc genhtml_function_coverage=1 00:07:00.650 --rc genhtml_legend=1 00:07:00.650 --rc geninfo_all_blocks=1 00:07:00.650 --rc geninfo_unexecuted_blocks=1 00:07:00.650 00:07:00.650 ' 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.650 --rc genhtml_branch_coverage=1 00:07:00.650 --rc genhtml_function_coverage=1 00:07:00.650 --rc genhtml_legend=1 00:07:00.650 --rc geninfo_all_blocks=1 00:07:00.650 --rc geninfo_unexecuted_blocks=1 00:07:00.650 00:07:00.650 ' 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.650 --rc genhtml_branch_coverage=1 00:07:00.650 --rc genhtml_function_coverage=1 00:07:00.650 --rc genhtml_legend=1 00:07:00.650 --rc geninfo_all_blocks=1 00:07:00.650 --rc geninfo_unexecuted_blocks=1 00:07:00.650 00:07:00.650 ' 00:07:00.650 10:01:28 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.650 --rc genhtml_branch_coverage=1 00:07:00.650 --rc genhtml_function_coverage=1 00:07:00.650 --rc genhtml_legend=1 00:07:00.650 --rc geninfo_all_blocks=1 00:07:00.650 --rc geninfo_unexecuted_blocks=1 00:07:00.650 00:07:00.650 ' 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.650 10:01:28 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:00.650 10:01:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71854 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71854 00:07:00.651 10:01:28 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 71854 ']' 00:07:00.651 10:01:28 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:00.651 10:01:28 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.651 10:01:28 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.651 10:01:28 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.651 10:01:28 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.651 10:01:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.651 [2024-11-03 10:01:28.910698] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
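waitforlisten above parks the test until spdk_tgt (pid 71854) is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling idea, assuming rpc.py from this SPDK tree; the real helper in autotest_common.sh additionally watches the pid and enforces a timeout:

    # Poll the RPC UNIX socket until the target answers (or give up).
    wait_for_rpc_socket() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods only succeeds once the app is listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }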
00:07:00.651 [2024-11-03 10:01:28.911082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71854 ] 00:07:00.911 [2024-11-03 10:01:29.048802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.911 [2024-11-03 10:01:29.101364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.482 10:01:29 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.482 10:01:29 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:01.482 10:01:29 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:01.482 10:01:29 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:01.482 10:01:29 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:01.482 10:01:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:01.482 10:01:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.482 10:01:29 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:01.482 10:01:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.482 10:01:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.054 10:01:30 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.054 10:01:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:02.054 10:01:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.054 10:01:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.054 10:01:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.055 10:01:30 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "98ceeb08-2b2b-4777-85e0-60957c9da06a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "98ceeb08-2b2b-4777-85e0-60957c9da06a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "67e4ee88-6473-487f-a3ae-f307956af6f2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "67e4ee88-6473-487f-a3ae-f307956af6f2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5b25e02f-1ecd-4b61-82c3-97b56aed74ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b25e02f-1ecd-4b61-82c3-97b56aed74ed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "081a7a1c-2545-45a0-9ceb-a1cd9aeca228"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "081a7a1c-2545-45a0-9ceb-a1cd9aeca228",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "374913f9-a2b7-49e2-a931-23a9605225d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "374913f9-a2b7-49e2-a931-23a9605225d8",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "59c499c4-25f8-4937-9a2d-149b14ad91ad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "59c499c4-25f8-4937-9a2d-149b14ad91ad",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:02.055 10:01:30 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71854 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 71854 ']' 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 71854 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:02.055 10:01:30 blockdev_nvme -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71854 00:07:02.055 killing process with pid 71854 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71854' 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 71854 00:07:02.055 10:01:30 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 71854 00:07:02.317 10:01:30 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:02.317 10:01:30 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:02.317 10:01:30 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:02.317 10:01:30 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.317 10:01:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.317 ************************************ 00:07:02.317 START TEST bdev_hello_world 00:07:02.317 ************************************ 00:07:02.317 10:01:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:02.578 [2024-11-03 10:01:30.683276] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:02.578 [2024-11-03 10:01:30.683603] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71927 ] 00:07:02.578 [2024-11-03 10:01:30.820889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.578 [2024-11-03 10:01:30.857285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.150 [2024-11-03 10:01:31.253262] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:03.150 [2024-11-03 10:01:31.253322] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:03.150 [2024-11-03 10:01:31.253359] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:03.150 [2024-11-03 10:01:31.255739] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:03.151 [2024-11-03 10:01:31.256642] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:03.151 [2024-11-03 10:01:31.256689] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:03.151 [2024-11-03 10:01:31.257255] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
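The hello_bdev run above opens Nvme0n1 from bdev.json, writes "Hello World!" through an I/O channel, and reads it back before stopping. The example can be rerun standalone with the same arguments the test used (paths as in this workspace):

    # Run the hello-world bdev example against the first NVMe bdev
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1    # -b selects which bdev to open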
00:07:03.151 00:07:03.151 [2024-11-03 10:01:31.257293] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:03.151 ************************************ 00:07:03.151 END TEST bdev_hello_world 00:07:03.151 ************************************ 00:07:03.151 00:07:03.151 real 0m0.838s 00:07:03.151 user 0m0.536s 00:07:03.151 sys 0m0.195s 00:07:03.151 10:01:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.151 10:01:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:03.151 10:01:31 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:03.151 10:01:31 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.151 10:01:31 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.151 10:01:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.412 ************************************ 00:07:03.412 START TEST bdev_bounds 00:07:03.412 ************************************ 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:03.412 Process bdevio pid: 71958 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71958 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71958' 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71958 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:03.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71958 ']' 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.412 10:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:03.412 [2024-11-03 10:01:31.586303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:03.412 [2024-11-03 10:01:31.586458] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71958 ] 00:07:03.412 [2024-11-03 10:01:31.724702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.673 [2024-11-03 10:01:31.779128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.673 [2024-11-03 10:01:31.779477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.673 [2024-11-03 10:01:31.779520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.246 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.246 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:04.246 10:01:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:04.246 I/O targets: 00:07:04.246 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:04.246 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:04.246 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:04.246 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:04.246 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:04.246 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:04.246 00:07:04.246 00:07:04.246 CUnit - A unit testing framework for C - Version 2.1-3 00:07:04.246 http://cunit.sourceforge.net/ 00:07:04.246 00:07:04.246 00:07:04.246 Suite: bdevio tests on: Nvme3n1 00:07:04.246 Test: blockdev write read block ...passed 00:07:04.246 Test: blockdev write zeroes read block ...passed 00:07:04.246 Test: blockdev write zeroes read no split ...passed 00:07:04.246 Test: blockdev write zeroes read split ...passed 00:07:04.246 Test: blockdev write zeroes read split partial ...passed 00:07:04.246 Test: blockdev reset ...[2024-11-03 10:01:32.570319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:04.246 [2024-11-03 10:01:32.575546] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
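bdevio was launched above with -w, so it only initializes the bdev layer and then waits; the CUnit suites start once tests.py perform_tests is sent over its RPC socket, which is what blockdev.sh@293 does. A sketch of that two-step pattern, assuming the default /var/tmp/spdk.sock:

    # Step 1: start bdevio in wait mode with the shared bdev config
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # Step 2: once the RPC socket is up, trigger all registered suites
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    wait "$bdevio_pid"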
00:07:04.246 passed 00:07:04.246 Test: blockdev write read 8 blocks ...passed 00:07:04.246 Test: blockdev write read size > 128k ...passed 00:07:04.246 Test: blockdev write read invalid size ...passed 00:07:04.246 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:04.246 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:04.246 Test: blockdev write read max offset ...passed 00:07:04.246 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:04.246 Test: blockdev writev readv 8 blocks ...passed 00:07:04.246 Test: blockdev writev readv 30 x 1block ...passed 00:07:04.246 Test: blockdev writev readv block ...passed 00:07:04.246 Test: blockdev writev readv size > 128k ...passed 00:07:04.246 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:04.246 Test: blockdev comparev and writev ...[2024-11-03 10:01:32.593911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c380a000 len:0x1000 00:07:04.246 [2024-11-03 10:01:32.593979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:04.246 passed 00:07:04.246 Test: blockdev nvme passthru rw ...passed 00:07:04.246 Test: blockdev nvme passthru vendor specific ...passed 00:07:04.246 Test: blockdev nvme admin passthru ...[2024-11-03 10:01:32.596355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:04.246 [2024-11-03 10:01:32.596412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:04.246 passed 00:07:04.246 Test: blockdev copy ...passed 00:07:04.246 Suite: bdevio tests on: Nvme2n3 00:07:04.246 Test: blockdev write read block ...passed 00:07:04.507 Test: blockdev write zeroes read block ...passed 00:07:04.507 Test: blockdev write zeroes read no split ...passed 00:07:04.507 Test: blockdev write zeroes read split ...passed 00:07:04.507 Test: blockdev write zeroes read split partial ...passed 00:07:04.507 Test: blockdev reset ...[2024-11-03 10:01:32.625001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:04.507 [2024-11-03 10:01:32.628888] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
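Each blockdev reset test detaches and reattaches the PCIe controller, as in the two Resetting-controller notices above. Outside bdevio the same operation can be requested by hand; a hedged sketch, assuming the bdev_nvme_reset_controller RPC is available in this SPDK version:

    # Ask the running target to reset the controller backing Nvme2n3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_nvme_reset_controller Nvme2    # takes the controller name, not the bdev name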
00:07:04.507 passed 00:07:04.507 Test: blockdev write read 8 blocks ...passed 00:07:04.507 Test: blockdev write read size > 128k ...passed 00:07:04.507 Test: blockdev write read invalid size ...passed 00:07:04.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:04.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:04.507 Test: blockdev write read max offset ...passed 00:07:04.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:04.507 Test: blockdev writev readv 8 blocks ...passed 00:07:04.507 Test: blockdev writev readv 30 x 1block ...passed 00:07:04.507 Test: blockdev writev readv block ...passed 00:07:04.507 Test: blockdev writev readv size > 128k ...passed 00:07:04.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:04.507 Test: blockdev comparev and writev ...[2024-11-03 10:01:32.645784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3803000 len:0x1000 00:07:04.507 [2024-11-03 10:01:32.645841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:04.507 passed 00:07:04.507 Test: blockdev nvme passthru rw ...passed 00:07:04.507 Test: blockdev nvme passthru vendor specific ...passed 00:07:04.507 Test: blockdev nvme admin passthru ...[2024-11-03 10:01:32.648572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:04.507 [2024-11-03 10:01:32.648621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:04.507 passed 00:07:04.507 Test: blockdev copy ...passed 00:07:04.508 Suite: bdevio tests on: Nvme2n2 00:07:04.508 Test: blockdev write read block ...passed 00:07:04.508 Test: blockdev write zeroes read block ...passed 00:07:04.508 Test: blockdev write zeroes read no split ...passed 00:07:04.508 Test: blockdev write zeroes read split ...passed 00:07:04.508 Test: blockdev write zeroes read split partial ...passed 00:07:04.508 Test: blockdev reset ...[2024-11-03 10:01:32.679257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:04.508 [2024-11-03 10:01:32.682023] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:04.508 passed 00:07:04.508 Test: blockdev write read 8 blocks ...passed 00:07:04.508 Test: blockdev write read size > 128k ...passed 00:07:04.508 Test: blockdev write read invalid size ...passed 00:07:04.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:04.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:04.508 Test: blockdev write read max offset ...passed 00:07:04.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:04.508 Test: blockdev writev readv 8 blocks ...passed 00:07:04.508 Test: blockdev writev readv 30 x 1block ...passed 00:07:04.508 Test: blockdev writev readv block ...passed 00:07:04.508 Test: blockdev writev readv size > 128k ...passed 00:07:04.508 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:04.508 Test: blockdev comparev and writev ...[2024-11-03 10:01:32.690549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:07:04.508 Test: blockdev nvme passthru rw ...passed 00:07:04.508 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2c3803000 len:0x1000 00:07:04.508 [2024-11-03 10:01:32.690730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:04.508 passed 00:07:04.508 Test: blockdev nvme admin passthru ...[2024-11-03 10:01:32.691387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:04.508 [2024-11-03 10:01:32.691420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:04.508 passed 00:07:04.508 Test: blockdev copy ...passed 00:07:04.508 Suite: bdevio tests on: Nvme2n1 00:07:04.508 Test: blockdev write read block ...passed 00:07:04.508 Test: blockdev write zeroes read block ...passed 00:07:04.508 Test: blockdev write zeroes read no split ...passed 00:07:04.508 Test: blockdev write zeroes read split ...passed 00:07:04.508 Test: blockdev write zeroes read split partial ...passed 00:07:04.508 Test: blockdev reset ...[2024-11-03 10:01:32.723351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:04.508 [2024-11-03 10:01:32.725630] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
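The comparev-and-writev cases depend on the bdev advertising the COMPARE command ("compare": true in the bdev_get_bdevs dump earlier). Which I/O types a bdev supports can be queried with the same rpc-plus-jq style blockdev.sh used above to filter unclaimed bdevs; a sketch:

    # List each bdev with its compare / compare-and-write capability
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | "\(.name) compare=\(.supported_io_types.compare) caw=\(.supported_io_types.compare_and_write)"'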
00:07:04.508 passed 00:07:04.508 Test: blockdev write read 8 blocks ...passed 00:07:04.508 Test: blockdev write read size > 128k ...passed 00:07:04.508 Test: blockdev write read invalid size ...passed 00:07:04.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:04.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:04.508 Test: blockdev write read max offset ...passed 00:07:04.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:04.508 Test: blockdev writev readv 8 blocks ...passed 00:07:04.508 Test: blockdev writev readv 30 x 1block ...passed 00:07:04.508 Test: blockdev writev readv block ...passed 00:07:04.508 Test: blockdev writev readv size > 128k ...passed 00:07:04.508 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:04.508 Test: blockdev comparev and writev ...[2024-11-03 10:01:32.731733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3803000 len:0x1000 00:07:04.508 [2024-11-03 10:01:32.731784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:04.508 passed 00:07:04.508 Test: blockdev nvme passthru rw ...passed 00:07:04.508 Test: blockdev nvme passthru vendor specific ...passed 00:07:04.508 Test: blockdev nvme admin passthru ...[2024-11-03 10:01:32.732315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:04.508 [2024-11-03 10:01:32.732350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:04.508 passed 00:07:04.508 Test: blockdev copy ...passed 00:07:04.508 Suite: bdevio tests on: Nvme1n1 00:07:04.508 Test: blockdev write read block ...passed 00:07:04.508 Test: blockdev write zeroes read block ...passed 00:07:04.508 Test: blockdev write zeroes read no split ...passed 00:07:04.508 Test: blockdev write zeroes read split ...passed 00:07:04.508 Test: blockdev write zeroes read split partial ...passed 00:07:04.508 Test: blockdev reset ...[2024-11-03 10:01:32.750280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:04.508 [2024-11-03 10:01:32.752880] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:04.508 passed 00:07:04.508 Test: blockdev write read 8 blocks ...passed 00:07:04.508 Test: blockdev write read size > 128k ...passed 00:07:04.508 Test: blockdev write read invalid size ...passed 00:07:04.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:04.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:04.508 Test: blockdev write read max offset ...passed 00:07:04.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:04.508 Test: blockdev writev readv 8 blocks ...passed 00:07:04.508 Test: blockdev writev readv 30 x 1block ...passed 00:07:04.508 Test: blockdev writev readv block ...passed 00:07:04.508 Test: blockdev writev readv size > 128k ...passed 00:07:04.508 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:04.508 Test: blockdev comparev and writev ...[2024-11-03 10:01:32.761478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:04.508 Test: blockdev nvme passthru rw ...passed 00:07:04.508 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2c3c36000 len:0x1000 00:07:04.508 [2024-11-03 10:01:32.761662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:04.508 passed 00:07:04.508 Test: blockdev nvme admin passthru ...[2024-11-03 10:01:32.762344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:04.508 [2024-11-03 10:01:32.762380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:04.508 passed 00:07:04.508 Test: blockdev copy ...passed 00:07:04.508 Suite: bdevio tests on: Nvme0n1 00:07:04.508 Test: blockdev write read block ...passed 00:07:04.508 Test: blockdev write zeroes read block ...passed 00:07:04.508 Test: blockdev write zeroes read no split ...passed 00:07:04.508 Test: blockdev write zeroes read split ...passed 00:07:04.508 Test: blockdev write zeroes read split partial ...passed 00:07:04.508 Test: blockdev reset ...[2024-11-03 10:01:32.793671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:04.508 [2024-11-03 10:01:32.795790] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
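Nvme0n1 is the only bdev in this run created with separate metadata (md_size 64, md_interleave false in the dump above), which is why the comparev step below is skipped for it. A sketch of checking that up front; the -b name filter on bdev_get_bdevs is assumed here:

    # Inspect Nvme0n1's metadata layout before deciding which tests apply
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {name, md_size, md_interleave, dif_type}'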
00:07:04.508 passed 00:07:04.508 Test: blockdev write read 8 blocks ...passed 00:07:04.508 Test: blockdev write read size > 128k ...passed 00:07:04.508 Test: blockdev write read invalid size ...passed 00:07:04.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:04.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:04.508 Test: blockdev write read max offset ...passed 00:07:04.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:04.508 Test: blockdev writev readv 8 blocks ...passed 00:07:04.508 Test: blockdev writev readv 30 x 1block ...passed 00:07:04.508 Test: blockdev writev readv block ...passed 00:07:04.508 Test: blockdev writev readv size > 128k ...passed 00:07:04.508 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:04.508 Test: blockdev comparev and writev ...passed 00:07:04.508 Test: blockdev nvme passthru rw ...[2024-11-03 10:01:32.802444] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:04.508 separate metadata which is not supported yet. 00:07:04.508 passed 00:07:04.508 Test: blockdev nvme passthru vendor specific ...passed 00:07:04.508 Test: blockdev nvme admin passthru ...[2024-11-03 10:01:32.803042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:04.508 [2024-11-03 10:01:32.803097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:04.508 passed 00:07:04.508 Test: blockdev copy ...passed 00:07:04.508 00:07:04.508 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.508 suites 6 6 n/a 0 0 00:07:04.508 tests 138 138 138 0 0 00:07:04.508 asserts 893 893 893 0 n/a 00:07:04.508 00:07:04.508 Elapsed time = 0.592 seconds 00:07:04.508 0 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71958 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71958 ']' 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71958 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71958 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71958' 00:07:04.508 killing process with pid 71958 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71958 00:07:04.508 10:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71958 00:07:04.770 10:01:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:04.770 00:07:04.770 real 0m1.520s 00:07:04.770 user 0m3.717s 00:07:04.770 sys 0m0.324s 00:07:04.770 10:01:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.770 ************************************ 00:07:04.770 END TEST bdev_bounds 00:07:04.770 10:01:33 blockdev_nvme.bdev_bounds -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.770 ************************************ 00:07:04.770 10:01:33 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:04.770 10:01:33 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:04.770 10:01:33 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.770 10:01:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.770 ************************************ 00:07:04.770 START TEST bdev_nbd 00:07:04.770 ************************************ 00:07:04.770 10:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:04.770 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:04.770 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:04.770 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.770 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:04.770 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72001 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72001 /var/tmp/spdk-nbd.sock 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72001 ']' 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
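The nbd test starting here exports each bdev as a kernel block device: bdev_svc is brought up on /var/tmp/spdk-nbd.sock, then nbd_start_disk binds a bdev to a /dev/nbdX node (traces below). The standalone pattern, assuming the nbd kernel module is loaded (blockdev.sh checked /sys/module/nbd above):

    # Export a bdev through the kernel NBD driver, then tear it down
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1 /dev/nbd0
    # ... /dev/nbd0 now behaves like any block device ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_stop_disk /dev/nbd0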
00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:04.771 10:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:05.032 [2024-11-03 10:01:33.182950] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:05.032 [2024-11-03 10:01:33.184009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.032 [2024-11-03 10:01:33.316867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.032 [2024-11-03 10:01:33.369336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 
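waitfornbd, traced below for each device, polls /proc/partitions until the new node registers, after which a single 4096-byte direct-I/O read via dd proves the export works. A condensed sketch of that verification loop (the sleep between retries is assumed; the excerpt only shows the retry counters):

    # Wait for nbd0 to appear, then read one block with O_DIRECT
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    (( $(stat -c %s /tmp/nbdtest) == 4096 ))    # dd must have copied a full block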
00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.967 1+0 records in 00:07:05.967 1+0 records out 00:07:05.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347903 s, 11.8 MB/s 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.967 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.968 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.228 1+0 records in 00:07:06.228 1+0 records out 00:07:06.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384289 s, 10.7 MB/s 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 
']' 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.228 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.488 1+0 records in 00:07:06.488 1+0 records out 00:07:06.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390624 s, 10.5 MB/s 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.488 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # break 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.746 1+0 records in 00:07:06.746 1+0 records out 00:07:06.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662043 s, 6.2 MB/s 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.746 10:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:07.004 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:07.004 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:07.004 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.005 1+0 records in 00:07:07.005 1+0 records out 00:07:07.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974618 s, 4.2 MB/s 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 00:07:07.005 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.264 1+0 records in 00:07:07.264 1+0 records out 00:07:07.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388982 s, 10.5 MB/s 00:07:07.264 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd0", 00:07:07.265 "bdev_name": "Nvme0n1" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd1", 00:07:07.265 "bdev_name": "Nvme1n1" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd2", 00:07:07.265 "bdev_name": "Nvme2n1" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd3", 00:07:07.265 "bdev_name": "Nvme2n2" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd4", 00:07:07.265 "bdev_name": "Nvme2n3" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd5", 00:07:07.265 "bdev_name": "Nvme3n1" 00:07:07.265 } 00:07:07.265 ]' 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:07.265 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:07.265 { 
00:07:07.265 "nbd_device": "/dev/nbd0", 00:07:07.265 "bdev_name": "Nvme0n1" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd1", 00:07:07.265 "bdev_name": "Nvme1n1" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd2", 00:07:07.265 "bdev_name": "Nvme2n1" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd3", 00:07:07.265 "bdev_name": "Nvme2n2" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd4", 00:07:07.265 "bdev_name": "Nvme2n3" 00:07:07.265 }, 00:07:07.265 { 00:07:07.265 "nbd_device": "/dev/nbd5", 00:07:07.265 "bdev_name": "Nvme3n1" 00:07:07.265 } 00:07:07.265 ]' 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.528 10:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.788 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:08.049 
10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.049 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.311 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.572 10:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:08.833 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
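Note: each nbd_start_disk call above and below is followed by the same readiness probe, traced one builtin at a time. A minimal sketch of that pattern, reconstructed from the traced commands (the helper name and retry bound mirror the trace; the real common/autotest_common.sh may differ in detail, and /tmp/nbdtest stands in for the repo-local nbdtest path):

    # Sketch only: pieced together from the xtrace output, not copied from SPDK.
    waitfornbd() {
        local nbd_name=$1 i
        # Phase 1: wait (up to 20 polls) for the kernel to list the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Phase 2: prove the device serves reads, not just that it is listed.
        # A single 4 KiB direct read must produce a non-empty file.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

The dd, stat, rm and '[' 4096 '!=' 0 ']' steps seen in every block above are exactly the second phase of this probe.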
00:07:08.834 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:09.095 /dev/nbd0 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.095 1+0 records in 00:07:09.095 1+0 records out 00:07:09.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154356 s, 2.7 MB/s 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.095 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:09.357 /dev/nbd1 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:09.357 1+0 records in 00:07:09.357 1+0 records out 00:07:09.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676419 s, 6.1 MB/s 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.357 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:09.618 /dev/nbd10 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.618 1+0 records in 00:07:09.618 1+0 records out 00:07:09.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904131 s, 4.5 MB/s 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.618 10:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:09.878 /dev/nbd11 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.878 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.879 1+0 records in 00:07:09.879 1+0 records out 00:07:09.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805948 s, 5.1 MB/s 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.879 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:10.139 /dev/nbd12 00:07:10.139 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:10.139 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:10.139 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.140 1+0 records in 00:07:10.140 1+0 records out 00:07:10.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108311 s, 3.8 MB/s 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.140 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:10.401 /dev/nbd13 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.401 1+0 records in 00:07:10.401 1+0 records out 00:07:10.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123026 s, 3.3 MB/s 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.401 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd0", 00:07:10.661 "bdev_name": "Nvme0n1" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd1", 00:07:10.661 "bdev_name": "Nvme1n1" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd10", 00:07:10.661 "bdev_name": "Nvme2n1" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd11", 00:07:10.661 
"bdev_name": "Nvme2n2" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd12", 00:07:10.661 "bdev_name": "Nvme2n3" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd13", 00:07:10.661 "bdev_name": "Nvme3n1" 00:07:10.661 } 00:07:10.661 ]' 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd0", 00:07:10.661 "bdev_name": "Nvme0n1" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd1", 00:07:10.661 "bdev_name": "Nvme1n1" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd10", 00:07:10.661 "bdev_name": "Nvme2n1" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd11", 00:07:10.661 "bdev_name": "Nvme2n2" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd12", 00:07:10.661 "bdev_name": "Nvme2n3" 00:07:10.661 }, 00:07:10.661 { 00:07:10.661 "nbd_device": "/dev/nbd13", 00:07:10.661 "bdev_name": "Nvme3n1" 00:07:10.661 } 00:07:10.661 ]' 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.661 /dev/nbd1 00:07:10.661 /dev/nbd10 00:07:10.661 /dev/nbd11 00:07:10.661 /dev/nbd12 00:07:10.661 /dev/nbd13' 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.661 /dev/nbd1 00:07:10.661 /dev/nbd10 00:07:10.661 /dev/nbd11 00:07:10.661 /dev/nbd12 00:07:10.661 /dev/nbd13' 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:10.661 256+0 records in 00:07:10.661 256+0 records out 00:07:10.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660187 s, 159 MB/s 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.661 10:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.922 256+0 records in 00:07:10.922 256+0 records out 00:07:10.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10115 s, 10.4 MB/s 00:07:10.922 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:07:10.922 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.922 256+0 records in 00:07:10.922 256+0 records out 00:07:10.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10524 s, 10.0 MB/s 00:07:10.922 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.922 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:11.182 256+0 records in 00:07:11.182 256+0 records out 00:07:11.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218598 s, 4.8 MB/s 00:07:11.182 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.182 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:11.442 256+0 records in 00:07:11.442 256+0 records out 00:07:11.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205719 s, 5.1 MB/s 00:07:11.442 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.442 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:11.701 256+0 records in 00:07:11.701 256+0 records out 00:07:11.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239759 s, 4.4 MB/s 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:11.701 256+0 records in 00:07:11.701 256+0 records out 00:07:11.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121941 s, 8.6 MB/s 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:11.701 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.702 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:11.702 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.702 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:11.702 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.702 10:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.702 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.961 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.221 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.222 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.482 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.742 10:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:13.003 10:01:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.003 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.262 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.262 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.262 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.262 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.262 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.262 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.262 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:13.263 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:13.523 malloc_lvol_verify 00:07:13.523 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:13.784 29e182ab-e7aa-4ab9-8bfc-5c1bfc91224a 00:07:13.784 10:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:14.045 99b80276-4227-4f99-bad3-d7642ab8e4a9 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:14.045 /dev/nbd0 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:14.045 10:01:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:14.045 mke2fs 1.47.0 (5-Feb-2023) 00:07:14.045 Discarding device blocks: 0/4096 done 00:07:14.045 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:14.045 00:07:14.045 Allocating group tables: 0/1 done 00:07:14.045 Writing inode tables: 0/1 done 00:07:14.045 Creating journal (1024 blocks): done 00:07:14.045 Writing superblocks and filesystem accounting information: 0/1 done 00:07:14.045 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.045 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72001 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72001 ']' 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72001 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72001 00:07:14.306 killing process with pid 72001 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72001' 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72001 00:07:14.306 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72001 00:07:14.567 ************************************ 00:07:14.567 END TEST bdev_nbd 00:07:14.567 ************************************ 00:07:14.567 10:01:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:14.567 00:07:14.567 real 0m9.682s 00:07:14.567 user 0m13.699s 
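For reference, the malloc-to-lvstore-to-lvol-to-nbd round trip that produced the mke2fs output above collapses to a handful of RPCs. This consolidation only regroups the traced commands (rpc.py is scripts/rpc.py in the SPDK tree; the shell framing is illustrative):

    sock=/var/tmp/spdk-nbd.sock
    rpc.py -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
    rpc.py -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    rpc.py -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    rpc.py -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                                 # 4096 x 1k blocks, as logged above
    rpc.py -s "$sock" nbd_stop_disk /dev/nbd0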
00:07:14.567 sys 0m3.279s 00:07:14.567 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.567 10:01:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.567 skipping fio tests on NVMe due to multi-ns failures. 00:07:14.567 10:01:42 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:14.567 10:01:42 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:14.567 10:01:42 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:14.567 10:01:42 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.567 10:01:42 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.567 10:01:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:14.567 10:01:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.567 10:01:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.567 ************************************ 00:07:14.567 START TEST bdev_verify 00:07:14.567 ************************************ 00:07:14.567 10:01:42 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.568 [2024-11-03 10:01:42.895097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:14.568 [2024-11-03 10:01:42.895192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72380 ] 00:07:14.834 [2024-11-03 10:01:43.021048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.834 [2024-11-03 10:01:43.056188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.834 [2024-11-03 10:01:43.056147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.161 Running I/O for 5 seconds... 
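The verify job that just started is a plain bdevperf run; the invocation traced above decodes as follows (flag meanings per bdevperf's usage text; paths abbreviated, and the trailing '' in the trace is an unused extra argument from the harness):

    # Sketch of the traced invocation, run from the SPDK repo root.
    #   --json bdev.json : bdev configuration (the six NVMe namespaces above)
    #   -q 128           : 128 outstanding I/Os per job
    #   -o 4096          : 4 KiB per I/O
    #   -w verify        : write, read back, and compare
    #   -t 5             : run for 5 seconds
    #   -C               : every core submits to every bdev, which is why each
    #                      device appears twice below (Core Mask 0x1 and 0x2)
    #   -m 0x3           : core mask, reactors on cores 0 and 1
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3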
00:07:17.490 19328.00 IOPS, 75.50 MiB/s [2024-11-03T10:01:46.794Z] 19936.00 IOPS, 77.88 MiB/s [2024-11-03T10:01:47.737Z] 19946.67 IOPS, 77.92 MiB/s [2024-11-03T10:01:48.677Z] 19936.00 IOPS, 77.88 MiB/s [2024-11-03T10:01:48.677Z] 20019.20 IOPS, 78.20 MiB/s
00:07:20.315 Latency(us)
00:07:20.315 [2024-11-03T10:01:48.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:20.315 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0x0 length 0xbd0bd
00:07:20.315 Nvme0n1 : 5.03 1628.28 6.36 0.00 0.00 78373.19 14216.27 86709.17
00:07:20.315 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:20.315 Nvme0n1 : 5.04 1649.92 6.45 0.00 0.00 77326.18 13107.20 85499.27
00:07:20.315 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0x0 length 0xa0000
00:07:20.315 Nvme1n1 : 5.06 1632.42 6.38 0.00 0.00 77772.99 7662.67 79046.50
00:07:20.315 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0xa0000 length 0xa0000
00:07:20.315 Nvme1n1 : 5.04 1649.46 6.44 0.00 0.00 77226.00 13208.02 74610.22
00:07:20.315 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0x0 length 0x80000
00:07:20.315 Nvme2n1 : 5.07 1639.97 6.41 0.00 0.00 77340.30 13107.20 61704.66
00:07:20.315 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0x80000 length 0x80000
00:07:20.315 Nvme2n1 : 5.06 1655.72 6.47 0.00 0.00 76651.02 4713.55 65334.35
00:07:20.315 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0x0 length 0x80000
00:07:20.315 Nvme2n2 : 5.08 1638.93 6.40 0.00 0.00 77224.76 14518.74 60494.77
00:07:20.315 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:20.315 Verification LBA range: start 0x80000 length 0x80000
00:07:20.316 Nvme2n2 : 5.07 1665.30 6.51 0.00 0.00 76139.39 6956.90 62107.96
00:07:20.316 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:20.316 Verification LBA range: start 0x0 length 0x80000
00:07:20.316 Nvme2n3 : 5.08 1637.93 6.40 0.00 0.00 77096.91 13308.85 61704.66
00:07:20.316 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:20.316 Verification LBA range: start 0x80000 length 0x80000
00:07:20.316 Nvme2n3 : 5.08 1664.26 6.50 0.00 0.00 76026.12 9124.63 64931.05
00:07:20.316 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:20.316 Verification LBA range: start 0x0 length 0x20000
00:07:20.316 Nvme3n1 : 5.09 1647.94 6.44 0.00 0.00 76554.25 3075.15 63317.86
00:07:20.316 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:20.316 Verification LBA range: start 0x20000 length 0x20000
00:07:20.316 Nvme3n1 : 5.08 1663.23 6.50 0.00 0.00 75906.14 9427.10 69367.34
00:07:20.316 [2024-11-03T10:01:48.678Z] ===================================================================================================================
00:07:20.316 [2024-11-03T10:01:48.678Z] Total : 19773.37 77.24 0.00 0.00 76963.37 3075.15 86709.17
00:07:20.887
00:07:20.887 real 0m6.321s
00:07:20.887 user 0m11.916s
00:07:20.887 sys 0m0.186s 10:01:49 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.887 ************************************ 00:07:20.887 END TEST bdev_verify 00:07:20.887 ************************************ 00:07:20.887 10:01:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.887 10:01:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:20.887 10:01:49 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:20.887 10:01:49 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.887 10:01:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.887 ************************************ 00:07:20.887 START TEST bdev_verify_big_io 00:07:20.887 ************************************ 00:07:20.887 10:01:49 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:21.148 [2024-11-03 10:01:49.299259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:21.148 [2024-11-03 10:01:49.299391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72467 ] 00:07:21.148 [2024-11-03 10:01:49.437257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.148 [2024-11-03 10:01:49.488087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.148 [2024-11-03 10:01:49.488146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.732 Running I/O for 5 seconds... 
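bdev_verify_big_io repeats the verify workload with the I/O size raised from 4096 to 65536 bytes, so the MiB/s column rather than raw IOPS is the figure to watch in the table that follows. That column is simply IOPS times I/O size; a quick check against the Total row of the 4 KiB pass above:

```bash
# Sanity-check the MiB/s column: IOPS x I/O size / 2^20.
# 19773.37 IOPS is the Total row of the 4 KiB verify table above.
awk 'BEGIN { printf "%.2f MiB/s\n", 19773.37 * 4096 / (1024 * 1024) }'
# prints 77.24 MiB/s, matching the reported total
```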
00:07:26.428 1764.00 IOPS, 110.25 MiB/s [2024-11-03T10:01:55.360Z] 2236.50 IOPS, 139.78 MiB/s [2024-11-03T10:01:55.932Z] 1991.00 IOPS, 124.44 MiB/s [2024-11-03T10:01:55.932Z] 2189.00 IOPS, 136.81 MiB/s 00:07:27.570 Latency(us) 00:07:27.570 [2024-11-03T10:01:55.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.570 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x0 length 0xbd0b 00:07:27.570 Nvme0n1 : 5.73 114.21 7.14 0.00 0.00 1075632.15 32465.53 1542213.32 00:07:27.570 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:27.570 Nvme0n1 : 5.69 129.56 8.10 0.00 0.00 947075.71 49202.41 1032444.06 00:07:27.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x0 length 0xa000 00:07:27.570 Nvme1n1 : 5.86 118.17 7.39 0.00 0.00 1008693.39 54848.59 1574477.19 00:07:27.570 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0xa000 length 0xa000 00:07:27.570 Nvme1n1 : 5.69 131.12 8.20 0.00 0.00 911706.72 88322.36 864671.90 00:07:27.570 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x0 length 0x8000 00:07:27.570 Nvme2n1 : 5.87 118.32 7.40 0.00 0.00 974457.99 61704.66 1600288.30 00:07:27.570 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x8000 length 0x8000 00:07:27.570 Nvme2n1 : 5.69 134.95 8.43 0.00 0.00 866410.08 72997.02 871124.68 00:07:27.570 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x0 length 0x8000 00:07:27.570 Nvme2n2 : 5.87 121.32 7.58 0.00 0.00 924758.56 73803.62 1619646.62 00:07:27.570 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x8000 length 0x8000 00:07:27.570 Nvme2n2 : 5.75 137.14 8.57 0.00 0.00 824288.83 53235.40 890483.00 00:07:27.570 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x0 length 0x8000 00:07:27.570 Nvme2n3 : 5.92 132.56 8.29 0.00 0.00 823102.49 13006.38 1664816.05 00:07:27.570 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x8000 length 0x8000 00:07:27.570 Nvme2n3 : 5.84 148.62 9.29 0.00 0.00 738016.09 17543.48 929199.66 00:07:27.570 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x0 length 0x2000 00:07:27.570 Nvme3n1 : 5.95 173.54 10.85 0.00 0.00 610323.95 245.76 909841.33 00:07:27.570 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:27.570 Verification LBA range: start 0x2000 length 0x2000 00:07:27.570 Nvme3n1 : 5.90 173.58 10.85 0.00 0.00 617908.28 589.19 929199.66 00:07:27.570 [2024-11-03T10:01:55.932Z] =================================================================================================================== 00:07:27.570 [2024-11-03T10:01:55.932Z] Total : 1633.11 102.07 0.00 0.00 839843.88 245.76 1664816.05 00:07:28.953 00:07:28.953 real 0m8.051s 00:07:28.953 user 0m15.301s 00:07:28.953 sys 0m0.275s 00:07:28.953 10:01:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:28.953 ************************************ 00:07:28.953 END TEST bdev_verify_big_io 00:07:28.953 ************************************ 00:07:28.953 10:01:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:29.212 10:01:57 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.212 10:01:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:29.212 10:01:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.212 10:01:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.212 ************************************ 00:07:29.212 START TEST bdev_write_zeroes 00:07:29.212 ************************************ 00:07:29.212 10:01:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.213 [2024-11-03 10:01:57.395823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.213 [2024-11-03 10:01:57.395951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72573 ] 00:07:29.213 [2024-11-03 10:01:57.530047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.213 [2024-11-03 10:01:57.560435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.778 Running I/O for 1 seconds... 
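Every stage in this log runs through the same run_test wrapper from common/autotest_common.sh, which produces the START/END banners and the real/user/sys timing lines seen after each test. A stripped-down, hypothetical re-implementation of the pattern (the real helper also manages xtrace state and timing bookkeeping):

```bash
# Hypothetical, minimal sketch of the run_test pattern; the real helper in
# common/autotest_common.sh also toggles xtrace and records timing data.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # emits the real/user/sys lines seen in this log
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}

# Shape of the write_zeroes stage launched above (trailing '' is an empty arg):
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
run_test bdev_write_zeroes "$SPDK_DIR/build/examples/bdevperf" \
    --json "$SPDK_DIR/test/bdev/bdev.json" -q 128 -o 4096 -w write_zeroes -t 1 ''
```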
00:07:30.713 64512.00 IOPS, 252.00 MiB/s 00:07:30.713 Latency(us) 00:07:30.713 [2024-11-03T10:01:59.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.713 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:30.713 Nvme0n1 : 1.02 10748.45 41.99 0.00 0.00 11885.41 5419.32 24601.21 00:07:30.713 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:30.713 Nvme1n1 : 1.02 10735.71 41.94 0.00 0.00 11883.25 8670.92 21173.17 00:07:30.713 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:30.713 Nvme2n1 : 1.02 10723.56 41.89 0.00 0.00 11862.15 8418.86 19761.62 00:07:30.713 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:30.713 Nvme2n2 : 1.02 10711.48 41.84 0.00 0.00 11822.55 8519.68 19459.15 00:07:30.713 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:30.713 Nvme2n3 : 1.02 10699.25 41.79 0.00 0.00 11797.62 8418.86 20265.75 00:07:30.713 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:30.713 Nvme3n1 : 1.02 10687.15 41.75 0.00 0.00 11771.21 5847.83 21979.77 00:07:30.713 [2024-11-03T10:01:59.075Z] =================================================================================================================== 00:07:30.713 [2024-11-03T10:01:59.075Z] Total : 64305.59 251.19 0.00 0.00 11837.03 5419.32 24601.21 00:07:30.972 00:07:30.972 real 0m1.805s 00:07:30.972 user 0m1.552s 00:07:30.972 sys 0m0.143s 00:07:30.972 ************************************ 00:07:30.972 END TEST bdev_write_zeroes 00:07:30.972 ************************************ 00:07:30.972 10:01:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.972 10:01:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:30.972 10:01:59 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.972 10:01:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:30.972 10:01:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.972 10:01:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.972 ************************************ 00:07:30.972 START TEST bdev_json_nonenclosed 00:07:30.972 ************************************ 00:07:30.972 10:01:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.972 [2024-11-03 10:01:59.251411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:30.972 [2024-11-03 10:01:59.251648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72609 ] 00:07:31.231 [2024-11-03 10:01:59.388137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.231 [2024-11-03 10:01:59.421204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.231 [2024-11-03 10:01:59.421309] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:31.231 [2024-11-03 10:01:59.421328] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:31.231 [2024-11-03 10:01:59.421345] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.231 ************************************ 00:07:31.231 END TEST bdev_json_nonenclosed 00:07:31.231 ************************************ 00:07:31.231 00:07:31.231 real 0m0.305s 00:07:31.231 user 0m0.114s 00:07:31.231 sys 0m0.087s 00:07:31.231 10:01:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.231 10:01:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:31.231 10:01:59 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.231 10:01:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:31.231 10:01:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.231 10:01:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.231 ************************************ 00:07:31.231 START TEST bdev_json_nonarray 00:07:31.231 ************************************ 00:07:31.231 10:01:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.496 [2024-11-03 10:01:59.624220] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:31.496 [2024-11-03 10:01:59.624342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72635 ] 00:07:31.496 [2024-11-03 10:01:59.765206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.496 [2024-11-03 10:01:59.800866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.496 [2024-11-03 10:01:59.800966] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:31.496 [2024-11-03 10:01:59.800989] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:31.496 [2024-11-03 10:01:59.801000] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.781 ************************************ 00:07:31.781 END TEST bdev_json_nonarray 00:07:31.781 ************************************ 00:07:31.781 00:07:31.781 real 0m0.323s 00:07:31.781 user 0m0.133s 00:07:31.781 sys 0m0.086s 00:07:31.781 10:01:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.781 10:01:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:31.781 10:01:59 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:31.781 ************************************ 00:07:31.781 END TEST blockdev_nvme 00:07:31.781 ************************************ 00:07:31.781 00:07:31.781 real 0m31.268s 00:07:31.781 user 0m48.968s 00:07:31.781 sys 0m5.374s 00:07:31.781 10:01:59 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.781 10:01:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.781 10:01:59 -- spdk/autotest.sh@209 -- # uname -s 00:07:31.781 10:01:59 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:31.781 10:01:59 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:31.781 10:01:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.781 10:01:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.781 10:01:59 -- common/autotest_common.sh@10 -- # set +x 00:07:31.781 ************************************ 00:07:31.781 START TEST blockdev_nvme_gpt 00:07:31.781 ************************************ 00:07:31.782 10:01:59 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:31.782 * Looking for test storage... 
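Before the gpt suite gets going, it is worth noting what the two JSON guards that closed out blockdev_nvme above actually check: bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf deliberately malformed --json configs and expect a clean error plus non-zero exit, which is exactly the "Invalid JSON configuration" / "spdk_app_stop'd on non-zero" sequence recorded. For contrast, a minimal well-formed app config is an object whose "subsystems" key is an array (hypothetical example, same shape as the bdev config this suite loads later):

```bash
# Minimal well-formed SPDK app config: outer {} with a "subsystems" array.
# nonenclosed.json drops the outer braces; nonarray.json makes
# "subsystems" a non-array. Both must fail cleanly, as traced above.
cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF
```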
00:07:31.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.782 10:02:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.782 --rc genhtml_branch_coverage=1 00:07:31.782 --rc genhtml_function_coverage=1 00:07:31.782 --rc genhtml_legend=1 00:07:31.782 --rc geninfo_all_blocks=1 00:07:31.782 --rc geninfo_unexecuted_blocks=1 00:07:31.782 00:07:31.782 ' 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.782 --rc 
genhtml_branch_coverage=1 00:07:31.782 --rc genhtml_function_coverage=1 00:07:31.782 --rc genhtml_legend=1 00:07:31.782 --rc geninfo_all_blocks=1 00:07:31.782 --rc geninfo_unexecuted_blocks=1 00:07:31.782 00:07:31.782 ' 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.782 --rc genhtml_branch_coverage=1 00:07:31.782 --rc genhtml_function_coverage=1 00:07:31.782 --rc genhtml_legend=1 00:07:31.782 --rc geninfo_all_blocks=1 00:07:31.782 --rc geninfo_unexecuted_blocks=1 00:07:31.782 00:07:31.782 ' 00:07:31.782 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.782 --rc genhtml_branch_coverage=1 00:07:31.782 --rc genhtml_function_coverage=1 00:07:31.782 --rc genhtml_legend=1 00:07:31.782 --rc geninfo_all_blocks=1 00:07:31.782 --rc geninfo_unexecuted_blocks=1 00:07:31.782 00:07:31.782 ' 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:31.782 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:32.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
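The scripts/common.sh trace above is a plain-bash version comparison: `lt 1.15 2` splits both version strings on `.`, `-` and `:` and compares them field by field, concluding here that the installed lcov predates 2.x (which decides the LCOV option names exported next). A simplified re-implementation of the idea, not the exact upstream code:

```bash
# Simplified sketch of the field-by-field version compare traced above
# (the real cmp_versions in scripts/common.sh supports other operators too).
lt() {  # usage: lt 1.15 2  -> exit 0 if $1 < $2
    local IFS='.-:' i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # versions are equal
}

lt 1.15 2 && echo "lcov < 2: use the pre-2.0 option names"
```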
00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72708 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72708 00:07:32.040 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 72708 ']' 00:07:32.040 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.040 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.040 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.040 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.040 10:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:32.040 10:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.040 [2024-11-03 10:02:00.207606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:32.040 [2024-11-03 10:02:00.207721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72708 ] 00:07:32.040 [2024-11-03 10:02:00.343821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.040 [2024-11-03 10:02:00.378219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.974 10:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.974 10:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:32.974 10:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:32.974 10:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:32.974 10:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:33.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:33.233 Waiting for block devices as requested 00:07:33.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.494 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.494 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.494 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.782 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:38.782 10:02:06 blockdev_nvme_gpt 
-- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:38.782 10:02:06 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:38.782 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:38.782 BYT; 00:07:38.783 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:38.783 BYT; 00:07:38.783 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 
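To recap the device preparation in flight here: parted found no recognizable label on the freshly reset /dev/nvme0n1, so the suite wrote a GPT label with two half-disk partitions, and the GUIDs being grepped out of module/bdev/gpt/gpt.h are stamped onto those partitions with sgdisk just below. Condensed into a standalone sketch (destructive: wipes the partition table, so scratch test devices only; GUID values are the ones from this log):

```bash
# Condensed replay of the GPT preparation traced around this point.
# DESTRUCTIVE: rewrites the partition table; use a scratch test device only.
dev=/dev/nvme0n1
spdk_type_guid=6527994e-2c5a-4eec-9613-8f5944074e8b      # SPDK_GPT_PART_TYPE_GUID
spdk_type_guid_old=7c5222bd-8f5d-4087-9c00-bf9843c7b58c  # ..._GUID_OLD
part1_uuid=6f89f330-603b-4116-ac73-2ca8eae53030
part2_uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df

# New GPT label with two partitions covering each half of the disk
parted -s "$dev" mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%

# Stamp partition type GUIDs + unique GUIDs so SPDK's gpt bdev module
# recognizes and exposes the partitions (seen later as Nvme1n1p1/p2)
sgdisk -t "1:$spdk_type_guid" -u "1:$part1_uuid" "$dev"
sgdisk -t "2:$spdk_type_guid_old" -u "2:$part2_uuid" "$dev"
```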
00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:38.783 10:02:06 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:38.783 10:02:06 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:39.716 The operation has completed successfully. 00:07:39.716 10:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:41.089 The operation has completed successfully. 00:07:41.089 10:02:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:41.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.655 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:41.655 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:41.655 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:41.655 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:41.655 10:02:09 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:41.655 10:02:09 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.655 10:02:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.655 [] 00:07:41.655 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.655 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:41.655 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:41.655 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:41.655 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:41.913 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:41.913 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.913 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.172 
10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.172 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:42.172 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:42.173 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2b3fdc1d-5fe0-436f-b80c-bb799df2f4f9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2b3fdc1d-5fe0-436f-b80c-bb799df2f4f9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e40d561b-2bca-49a6-aa3c-5a87b2582e46"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e40d561b-2bca-49a6-aa3c-5a87b2582e46",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "805cbff4-4779-432c-b106-1754cb35bf0b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "805cbff4-4779-432c-b106-1754cb35bf0b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b511f7ff-e2c1-49f5-bb76-9b1986e6a224"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b511f7ff-e2c1-49f5-bb76-9b1986e6a224",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "271ef6a2-e94e-411d-86a6-22a0cc247c49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "271ef6a2-e94e-411d-86a6-22a0cc247c49",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:42.173 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:42.173 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:42.173 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:42.173 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72708 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 72708 ']' 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 72708 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72708 00:07:42.173 killing process with pid 72708 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72708' 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 72708 00:07:42.173 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 72708 00:07:42.431 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:42.431 10:02:10 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 
00:07:42.431 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:42.431 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.431 10:02:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.431 ************************************ 00:07:42.431 START TEST bdev_hello_world 00:07:42.431 ************************************ 00:07:42.431 10:02:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:42.689 [2024-11-03 10:02:10.836001] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:42.689 [2024-11-03 10:02:10.836101] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73326 ] 00:07:42.689 [2024-11-03 10:02:10.972557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.689 [2024-11-03 10:02:11.004780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.255 [2024-11-03 10:02:11.372183] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:43.255 [2024-11-03 10:02:11.372389] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:43.255 [2024-11-03 10:02:11.372426] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:43.255 [2024-11-03 10:02:11.374458] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:43.255 [2024-11-03 10:02:11.375121] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:43.255 [2024-11-03 10:02:11.375155] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:43.255 [2024-11-03 10:02:11.375523] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:43.255 00:07:43.255 [2024-11-03 10:02:11.375549] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:43.255 00:07:43.255 real 0m0.757s 00:07:43.255 user 0m0.499s 00:07:43.255 sys 0m0.155s 00:07:43.255 ************************************ 00:07:43.255 END TEST bdev_hello_world 00:07:43.255 ************************************ 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:43.255 10:02:11 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:43.255 10:02:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:43.255 10:02:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.255 10:02:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.255 ************************************ 00:07:43.255 START TEST bdev_bounds 00:07:43.255 ************************************ 00:07:43.255 Process bdevio pid: 73352 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73352 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73352' 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73352 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73352 ']' 00:07:43.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.255 10:02:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.256 10:02:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.256 10:02:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.256 10:02:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.256 10:02:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:43.256 10:02:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:43.514 [2024-11-03 10:02:11.653072] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:43.514 [2024-11-03 10:02:11.653336] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73352 ] 00:07:43.514 [2024-11-03 10:02:11.786670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.514 [2024-11-03 10:02:11.820644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.514 [2024-11-03 10:02:11.820927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.514 [2024-11-03 10:02:11.820964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.448 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.448 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:44.448 10:02:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:44.448 I/O targets: 00:07:44.448 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:44.448 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:44.448 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:44.448 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:44.448 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:44.448 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:44.448 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:44.448 00:07:44.448 00:07:44.448 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.448 http://cunit.sourceforge.net/ 00:07:44.448 00:07:44.448 00:07:44.448 Suite: bdevio tests on: Nvme3n1 00:07:44.448 Test: blockdev write read block ...passed 00:07:44.448 Test: blockdev write zeroes read block ...passed 00:07:44.448 Test: blockdev write zeroes read no split ...passed 00:07:44.448 Test: blockdev write zeroes read split ...passed 00:07:44.448 Test: blockdev write zeroes read split partial ...passed 00:07:44.448 Test: blockdev reset ...[2024-11-03 10:02:12.604885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:44.448 [2024-11-03 10:02:12.607401] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.448 passed 00:07:44.448 Test: blockdev write read 8 blocks ...passed 00:07:44.448 Test: blockdev write read size > 128k ...passed 00:07:44.448 Test: blockdev write read invalid size ...passed 00:07:44.448 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.448 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.448 Test: blockdev write read max offset ...passed 00:07:44.448 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.448 Test: blockdev writev readv 8 blocks ...passed 00:07:44.448 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.448 Test: blockdev writev readv block ...passed 00:07:44.448 Test: blockdev writev readv size > 128k ...passed 00:07:44.448 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.448 Test: blockdev comparev and writev ...[2024-11-03 10:02:12.616494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bec0a000 len:0x1000 00:07:44.448 [2024-11-03 10:02:12.616645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.448 passed 00:07:44.448 Test: blockdev nvme passthru rw ...passed 00:07:44.448 Test: blockdev nvme passthru vendor specific ...[2024-11-03 10:02:12.617840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.448 [2024-11-03 10:02:12.617871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:44.448 passed 00:07:44.448 Test: blockdev nvme admin passthru ...passed 00:07:44.448 Test: blockdev copy ...passed 00:07:44.448 Suite: bdevio tests on: Nvme2n3 00:07:44.448 Test: blockdev write read block ...passed 00:07:44.448 Test: blockdev write zeroes read block ...passed 00:07:44.448 Test: blockdev write zeroes read no split ...passed 00:07:44.448 Test: blockdev write zeroes read split ...passed 00:07:44.448 Test: blockdev write zeroes read split partial ...passed 00:07:44.448 Test: blockdev reset ...[2024-11-03 10:02:12.643961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:44.448 [2024-11-03 10:02:12.646919] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:44.448 passed 00:07:44.448 Test: blockdev write read 8 blocks ...passed 00:07:44.448 Test: blockdev write read size > 128k ...passed 00:07:44.448 Test: blockdev write read invalid size ...passed 00:07:44.448 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.448 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.448 Test: blockdev write read max offset ...passed 00:07:44.448 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.448 Test: blockdev writev readv 8 blocks ...passed 00:07:44.448 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.448 Test: blockdev writev readv block ...passed 00:07:44.448 Test: blockdev writev readv size > 128k ...passed 00:07:44.448 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.448 Test: blockdev comparev and writev ...[2024-11-03 10:02:12.661450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aac04000 len:0x1000 00:07:44.448 [2024-11-03 10:02:12.661497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.448 passed 00:07:44.448 Test: blockdev nvme passthru rw ...passed 00:07:44.448 Test: blockdev nvme passthru vendor specific ...[2024-11-03 10:02:12.663171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.448 [2024-11-03 10:02:12.663311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:44.448 passed 00:07:44.448 Test: blockdev nvme admin passthru ...passed 00:07:44.448 Test: blockdev copy ...passed 00:07:44.448 Suite: bdevio tests on: Nvme2n2 00:07:44.448 Test: blockdev write read block ...passed 00:07:44.448 Test: blockdev write zeroes read block ...passed 00:07:44.448 Test: blockdev write zeroes read no split ...passed 00:07:44.448 Test: blockdev write zeroes read split ...passed 00:07:44.448 Test: blockdev write zeroes read split partial ...passed 00:07:44.448 Test: blockdev reset ...[2024-11-03 10:02:12.682246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:44.449 passed 00:07:44.449 Test: blockdev write read 8 blocks ...[2024-11-03 10:02:12.684402] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:44.449 passed 00:07:44.449 Test: blockdev write read size > 128k ...passed 00:07:44.449 Test: blockdev write read invalid size ...passed 00:07:44.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.449 Test: blockdev write read max offset ...passed 00:07:44.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.449 Test: blockdev writev readv 8 blocks ...passed 00:07:44.449 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.449 Test: blockdev writev readv block ...passed 00:07:44.449 Test: blockdev writev readv size > 128k ...passed 00:07:44.449 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.449 Test: blockdev comparev and writev ...[2024-11-03 10:02:12.699854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aac04000 len:0x1000 00:07:44.449 [2024-11-03 10:02:12.699909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.449 passed 00:07:44.449 Test: blockdev nvme passthru rw ...passed 00:07:44.449 Test: blockdev nvme passthru vendor specific ...[2024-11-03 10:02:12.702461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.449 [2024-11-03 10:02:12.702592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:44.449 passed 00:07:44.449 Test: blockdev nvme admin passthru ...passed 00:07:44.449 Test: blockdev copy ...passed 00:07:44.449 Suite: bdevio tests on: Nvme2n1 00:07:44.449 Test: blockdev write read block ...passed 00:07:44.449 Test: blockdev write zeroes read block ...passed 00:07:44.449 Test: blockdev write zeroes read no split ...passed 00:07:44.449 Test: blockdev write zeroes read split ...passed 00:07:44.449 Test: blockdev write zeroes read split partial ...passed 00:07:44.449 Test: blockdev reset ...[2024-11-03 10:02:12.731912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:44.449 [2024-11-03 10:02:12.734274] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:44.449 passed 00:07:44.449 Test: blockdev write read 8 blocks ...passed 00:07:44.449 Test: blockdev write read size > 128k ...passed 00:07:44.449 Test: blockdev write read invalid size ...passed 00:07:44.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.449 Test: blockdev write read max offset ...passed 00:07:44.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.449 Test: blockdev writev readv 8 blocks ...passed 00:07:44.449 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.449 Test: blockdev writev readv block ...passed 00:07:44.449 Test: blockdev writev readv size > 128k ...passed 00:07:44.449 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.449 Test: blockdev comparev and writev ...[2024-11-03 10:02:12.739861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aac06000 len:0x1000 00:07:44.449 [2024-11-03 10:02:12.739898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.449 passed 00:07:44.449 Test: blockdev nvme passthru rw ...passed 00:07:44.449 Test: blockdev nvme passthru vendor specific ...passed 00:07:44.449 Test: blockdev nvme admin passthru ...[2024-11-03 10:02:12.740451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.449 [2024-11-03 10:02:12.740477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:44.449 passed 00:07:44.449 Test: blockdev copy ...passed 00:07:44.449 Suite: bdevio tests on: Nvme1n1p2 00:07:44.449 Test: blockdev write read block ...passed 00:07:44.449 Test: blockdev write zeroes read block ...passed 00:07:44.449 Test: blockdev write zeroes read no split ...passed 00:07:44.449 Test: blockdev write zeroes read split ...passed 00:07:44.449 Test: blockdev write zeroes read split partial ...passed 00:07:44.449 Test: blockdev reset ...[2024-11-03 10:02:12.754871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:44.449 [2024-11-03 10:02:12.756544] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.449 passed 00:07:44.449 Test: blockdev write read 8 blocks ...passed 00:07:44.449 Test: blockdev write read size > 128k ...passed 00:07:44.449 Test: blockdev write read invalid size ...passed 00:07:44.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.449 Test: blockdev write read max offset ...passed 00:07:44.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.449 Test: blockdev writev readv 8 blocks ...passed 00:07:44.449 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.449 Test: blockdev writev readv block ...passed 00:07:44.449 Test: blockdev writev readv size > 128k ...passed 00:07:44.449 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.449 Test: blockdev comparev and writev ...[2024-11-03 10:02:12.770860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2aac02000 len:0x1000 00:07:44.449 [2024-11-03 10:02:12.770900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.449 passed 00:07:44.449 Test: blockdev nvme passthru rw ...passed 00:07:44.449 Test: blockdev nvme passthru vendor specific ...passed 00:07:44.449 Test: blockdev nvme admin passthru ...passed 00:07:44.449 Test: blockdev copy ...passed 00:07:44.449 Suite: bdevio tests on: Nvme1n1p1 00:07:44.449 Test: blockdev write read block ...passed 00:07:44.449 Test: blockdev write zeroes read block ...passed 00:07:44.449 Test: blockdev write zeroes read no split ...passed 00:07:44.449 Test: blockdev write zeroes read split ...passed 00:07:44.449 Test: blockdev write zeroes read split partial ...passed 00:07:44.449 Test: blockdev reset ...[2024-11-03 10:02:12.788637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:44.449 passed 00:07:44.449 Test: blockdev write read 8 blocks ...[2024-11-03 10:02:12.790282] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.449 passed 00:07:44.449 Test: blockdev write read size > 128k ...passed 00:07:44.449 Test: blockdev write read invalid size ...passed 00:07:44.449 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.449 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.449 Test: blockdev write read max offset ...passed 00:07:44.449 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.449 Test: blockdev writev readv 8 blocks ...passed 00:07:44.449 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.449 Test: blockdev writev readv block ...passed 00:07:44.449 Test: blockdev writev readv size > 128k ...passed 00:07:44.449 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.449 Test: blockdev comparev and writev ...[2024-11-03 10:02:12.804201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c263b000 len:0x1000 00:07:44.449 [2024-11-03 10:02:12.804249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.449 passed 00:07:44.449 Test: blockdev nvme passthru rw ...passed 00:07:44.449 Test: blockdev nvme passthru vendor specific ...passed 00:07:44.449 Test: blockdev nvme admin passthru ...passed 00:07:44.449 Test: blockdev copy ...passed 00:07:44.449 Suite: bdevio tests on: Nvme0n1 00:07:44.449 Test: blockdev write read block ...passed 00:07:44.707 Test: blockdev write zeroes read block ...passed 00:07:44.707 Test: blockdev write zeroes read no split ...passed 00:07:44.707 Test: blockdev write zeroes read split ...passed 00:07:44.707 Test: blockdev write zeroes read split partial ...passed 00:07:44.707 Test: blockdev reset ...[2024-11-03 10:02:12.822201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:44.707 passed 00:07:44.707 Test: blockdev write read 8 blocks ...[2024-11-03 10:02:12.824336] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:44.707 passed 00:07:44.707 Test: blockdev write read size > 128k ...passed 00:07:44.707 Test: blockdev write read invalid size ...passed 00:07:44.707 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.707 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.707 Test: blockdev write read max offset ...passed 00:07:44.707 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.707 Test: blockdev writev readv 8 blocks ...passed 00:07:44.707 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.707 Test: blockdev writev readv block ...passed 00:07:44.707 Test: blockdev writev readv size > 128k ...passed 00:07:44.707 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.707 Test: blockdev comparev and writev ...passed 00:07:44.707 Test: blockdev nvme passthru rw ...[2024-11-03 10:02:12.836692] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:44.707 separate metadata which is not supported yet. 
00:07:44.707 passed 00:07:44.707 Test: blockdev nvme passthru vendor specific ...[2024-11-03 10:02:12.837868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:44.707 [2024-11-03 10:02:12.837969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:44.707 passed 00:07:44.707 Test: blockdev nvme admin passthru ...passed 00:07:44.707 Test: blockdev copy ...passed 00:07:44.707 00:07:44.707 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.707 suites 7 7 n/a 0 0 00:07:44.707 tests 161 161 161 0 0 00:07:44.707 asserts 1025 1025 1025 0 n/a 00:07:44.707 00:07:44.707 Elapsed time = 0.582 seconds 00:07:44.707 0 00:07:44.707 10:02:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73352 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73352 ']' 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73352 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73352 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73352' 00:07:44.708 killing process with pid 73352 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73352 00:07:44.708 10:02:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73352 00:07:44.708 10:02:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:44.708 00:07:44.708 real 0m1.446s 00:07:44.708 user 0m3.635s 00:07:44.708 sys 0m0.259s 00:07:44.708 10:02:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.708 10:02:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:44.708 ************************************ 00:07:44.708 END TEST bdev_bounds 00:07:44.708 ************************************ 00:07:44.966 10:02:13 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:44.966 10:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:44.966 10:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.966 10:02:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.966 ************************************ 00:07:44.966 START TEST bdev_nbd 00:07:44.966 ************************************ 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[
Linux == Linux ]] 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73400 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73400 /var/tmp/spdk-nbd.sock 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73400 ']' 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:44.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.966 10:02:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:44.966 [2024-11-03 10:02:13.163506] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
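(Aside: nbd_function_test only proceeds when the kernel nbd driver is present, per the [[ -e /sys/module/nbd ]] guard above; it then brings up the lightweight bdev_svc app on a dedicated RPC socket and waits for it before attaching any of the seven bdevs to /dev/nbd* nodes. A hedged sketch of that precondition check and launch, using the paths from this run; the modprobe fallback and the socket poll are simplifications of what the harness actually does:

    # The nbd kernel module must be loaded or no /dev/nbd* node can be driven.
    [[ -e /sys/module/nbd ]] || modprobe nbd

    # bdev_svc exposes the bdevs from bdev.json over a private RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!

    # Stand-in for waitforlisten: proceed once the UNIX socket is listening.
    while [ ! -S /var/tmp/spdk-nbd.sock ]; do sleep 0.1; done)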
00:07:44.966 [2024-11-03 10:02:13.163611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.966 [2024-11-03 10:02:13.299232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.224 [2024-11-03 10:02:13.331707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.789 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.046 1+0 records in 00:07:46.046 1+0 records out 00:07:46.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813734 s, 5.0 MB/s 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.046 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.304 1+0 records in 00:07:46.304 1+0 records out 00:07:46.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000892317 s, 4.6 MB/s 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.304 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.562 1+0 records in 00:07:46.562 1+0 records out 00:07:46.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108083 s, 3.8 MB/s 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.562 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.820 1+0 records in 00:07:46.820 1+0 records out 00:07:46.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010766 s, 3.8 MB/s 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.820 10:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.820 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.078 1+0 records in 00:07:47.078 1+0 records out 00:07:47.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115712 s, 3.5 MB/s 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
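(Aside: every nbd_start_disk call above is followed by the same waitfornbd check: poll /proc/partitions until the kernel registers the new node, then read one 4 KiB block with O_DIRECT to prove the nbd-to-bdev path actually moves data. A condensed sketch of that attach-and-verify step, assuming the RPC socket from this run; map_and_check is a hypothetical helper, the 0.1 s sleep is illustrative, and the 20-try ceiling mirrors waitfornbd:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    map_and_check() {    # attach a bdev to an nbd node, then verify it
        local bdev=$1 nbd=$2 i
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev" "$nbd"
        # Wait until the kernel lists the node (same check as waitfornbd).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "${nbd#/dev/}" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read fails fast if the nbd device is not really backed.
        dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct
    }

    map_and_check Nvme0n1 /dev/nbd0
    map_and_check Nvme2n3 /dev/nbd5)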
00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.078 1+0 records in 00:07:47.078 1+0 records out 00:07:47.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104729 s, 3.9 MB/s 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.078 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.336 1+0 records in 00:07:47.336 1+0 records out 00:07:47.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113797 s, 3.6 MB/s 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.336 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd0", 00:07:47.594 "bdev_name": "Nvme0n1" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd1", 00:07:47.594 "bdev_name": "Nvme1n1p1" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd2", 00:07:47.594 "bdev_name": "Nvme1n1p2" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd3", 00:07:47.594 "bdev_name": "Nvme2n1" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd4", 00:07:47.594 "bdev_name": "Nvme2n2" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd5", 00:07:47.594 "bdev_name": "Nvme2n3" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd6", 00:07:47.594 "bdev_name": "Nvme3n1" 00:07:47.594 } 00:07:47.594 ]' 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd0", 00:07:47.594 "bdev_name": "Nvme0n1" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd1", 00:07:47.594 "bdev_name": "Nvme1n1p1" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd2", 00:07:47.594 "bdev_name": "Nvme1n1p2" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd3", 00:07:47.594 "bdev_name": "Nvme2n1" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd4", 00:07:47.594 "bdev_name": "Nvme2n2" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd5", 00:07:47.594 "bdev_name": "Nvme2n3" 00:07:47.594 }, 00:07:47.594 { 00:07:47.594 "nbd_device": "/dev/nbd6", 00:07:47.594 "bdev_name": "Nvme3n1" 00:07:47.594 } 00:07:47.594 ]' 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.594 10:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.852 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.109 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.367 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.625 10:02:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.883 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:49.140 10:02:17 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.140 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:49.398 /dev/nbd0 00:07:49.398 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.656 1+0 records in 00:07:49.656 1+0 records out 00:07:49.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468338 s, 8.7 MB/s 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:49.656 /dev/nbd1 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.656 1+0 records in 00:07:49.656 1+0 records out 00:07:49.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044587 s, 9.2 MB/s 00:07:49.656 10:02:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.656 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.656 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.656 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.656 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.656 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.656 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.656 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:49.914 /dev/nbd10 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.914 1+0 records in 00:07:49.914 1+0 records out 00:07:49.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376615 s, 10.9 MB/s 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.914 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:50.172 /dev/nbd11 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.172 1+0 records in 00:07:50.172 1+0 records out 00:07:50.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435005 s, 9.4 MB/s 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:50.172 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:50.430 /dev/nbd12 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:50.430 10:02:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.430 1+0 records in 00:07:50.430 1+0 records out 00:07:50.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446805 s, 9.2 MB/s 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:50.430 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:50.689 /dev/nbd13 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:50.689 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.689 1+0 records in 00:07:50.689 1+0 records out 00:07:50.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477582 s, 8.6 MB/s 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:50.690 10:02:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:50.948 /dev/nbd14 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.948 1+0 records in 00:07:50.948 1+0 records out 00:07:50.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436251 s, 9.4 MB/s 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.948 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd0", 00:07:51.206 "bdev_name": "Nvme0n1" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd1", 00:07:51.206 "bdev_name": "Nvme1n1p1" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd10", 00:07:51.206 "bdev_name": "Nvme1n1p2" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd11", 00:07:51.206 "bdev_name": "Nvme2n1" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd12", 00:07:51.206 "bdev_name": "Nvme2n2" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd13", 00:07:51.206 "bdev_name": "Nvme2n3" 00:07:51.206 }, 00:07:51.206 { 
00:07:51.206 "nbd_device": "/dev/nbd14", 00:07:51.206 "bdev_name": "Nvme3n1" 00:07:51.206 } 00:07:51.206 ]' 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd0", 00:07:51.206 "bdev_name": "Nvme0n1" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd1", 00:07:51.206 "bdev_name": "Nvme1n1p1" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd10", 00:07:51.206 "bdev_name": "Nvme1n1p2" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd11", 00:07:51.206 "bdev_name": "Nvme2n1" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd12", 00:07:51.206 "bdev_name": "Nvme2n2" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd13", 00:07:51.206 "bdev_name": "Nvme2n3" 00:07:51.206 }, 00:07:51.206 { 00:07:51.206 "nbd_device": "/dev/nbd14", 00:07:51.206 "bdev_name": "Nvme3n1" 00:07:51.206 } 00:07:51.206 ]' 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:51.206 /dev/nbd1 00:07:51.206 /dev/nbd10 00:07:51.206 /dev/nbd11 00:07:51.206 /dev/nbd12 00:07:51.206 /dev/nbd13 00:07:51.206 /dev/nbd14' 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:51.206 /dev/nbd1 00:07:51.206 /dev/nbd10 00:07:51.206 /dev/nbd11 00:07:51.206 /dev/nbd12 00:07:51.206 /dev/nbd13 00:07:51.206 /dev/nbd14' 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:51.206 256+0 records in 00:07:51.206 256+0 records out 00:07:51.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109509 s, 95.8 MB/s 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:51.206 256+0 records in 00:07:51.206 256+0 records out 00:07:51.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0617753 s, 17.0 MB/s 
00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:51.206 256+0 records in 00:07:51.206 256+0 records out 00:07:51.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0634836 s, 16.5 MB/s 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:51.206 256+0 records in 00:07:51.206 256+0 records out 00:07:51.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.060989 s, 17.2 MB/s 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.206 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:51.464 256+0 records in 00:07:51.464 256+0 records out 00:07:51.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.060934 s, 17.2 MB/s 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:51.464 256+0 records in 00:07:51.464 256+0 records out 00:07:51.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0612717 s, 17.1 MB/s 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:51.464 256+0 records in 00:07:51.464 256+0 records out 00:07:51.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0582279 s, 18.0 MB/s 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:51.464 256+0 records in 00:07:51.464 256+0 records out 00:07:51.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0586032 s, 17.9 MB/s 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
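The nbd_dd_data_verify traces here split into a write pass (just completed) and a verify pass (the cmp calls that follow). Collapsing the script's two separate invocations into one hedged sketch, with an assumed scratch-file location, the round trip is: fill a 1 MiB file from /dev/urandom, copy it to every attached nbd device with O_DIRECT, then compare each device back against the file:

  nbd_dd_data_verify() {
      local tmp_file=/tmp/nbdrandtest nbd
      # Write pass: 256 x 4 KiB of random data, pushed to each device directly.
      dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
      for nbd in "$@"; do
          dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
      done
      # Verify pass: the first 1 MiB of every device must match the source byte for byte.
      for nbd in "$@"; do
          cmp -b -n 1M "$tmp_file" "$nbd" || return 1
      done
      rm "$tmp_file"
  }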
00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.464 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.722 10:02:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.722 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.979 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.236 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.494 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:52.751 10:02:20 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.751 10:02:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.751 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.029 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.301 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:53.302 
10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:53.302 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:53.559 malloc_lvol_verify 00:07:53.559 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:53.816 f632fd4a-9c2d-437e-94b9-87726e199cd8 00:07:53.816 10:02:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:53.816 a790a428-1153-4c6a-9bf0-fcfc01066b10 00:07:53.816 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:54.073 /dev/nbd0 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:54.073 mke2fs 1.47.0 (5-Feb-2023) 00:07:54.073 Discarding device blocks: 0/4096 done 00:07:54.073 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:54.073 00:07:54.073 Allocating group tables: 0/1 done 00:07:54.073 Writing inode tables: 0/1 done 00:07:54.073 Creating journal (1024 blocks): done 00:07:54.073 Writing superblocks and filesystem accounting information: 0/1 done 00:07:54.073 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:54.073 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.074 10:02:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73400 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73400 ']' 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73400 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73400 00:07:54.331 killing process with pid 73400 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73400' 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73400 00:07:54.331 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73400 00:07:54.589 ************************************ 00:07:54.589 END TEST bdev_nbd 00:07:54.589 ************************************ 00:07:54.589 10:02:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:54.589 00:07:54.589 real 0m9.671s 00:07:54.589 user 0m14.158s 00:07:54.589 sys 0m3.324s 00:07:54.589 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.589 10:02:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:54.589 10:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:54.589 10:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:54.589 skipping fio tests on NVMe due to multi-ns failures. 00:07:54.589 10:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:54.589 10:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
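The nbd_with_lvol_verify step traced just above stacks a logical volume on a malloc bdev, exports it as /dev/nbd0, and proves the export is a real block device by formatting it. Written out as plain shell, the RPC sequence this run used (socket path and sizes copied from the trace; only the grouping into a variable is added here):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MiB malloc bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs    # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                     # 4 MiB lvol named "lvol"
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                  # export the lvol over nbd
  (( $(cat /sys/block/nbd0/size) != 0 ))                  # capacity must be set before use
  mkfs.ext4 /dev/nbd0                                     # a successful mkfs is the pass signal
  $rpc nbd_stop_disk /dev/nbd0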
00:07:54.589 10:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:54.589 10:02:22 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:54.589 10:02:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:54.589 10:02:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.589 10:02:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.589 ************************************ 00:07:54.589 START TEST bdev_verify 00:07:54.589 ************************************ 00:07:54.589 10:02:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:54.589 [2024-11-03 10:02:22.891421] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:54.589 [2024-11-03 10:02:22.891526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73802 ] 00:07:54.848 [2024-11-03 10:02:23.026238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:54.848 [2024-11-03 10:02:23.055555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.848 [2024-11-03 10:02:23.055636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.107 Running I/O for 5 seconds... 
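Every suite in this log, bdev_verify included, is launched through the same run_test wrapper, which is what prints the starred START TEST/END TEST banners and the real/user/sys timing lines seen throughout. A rough reconstruction of that harness (the actual helper in common/autotest_common.sh also tracks xtrace state and exit codes):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # the body under test, e.g. bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }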
00:07:57.410 25024.00 IOPS, 97.75 MiB/s [2024-11-03T10:02:26.707Z] 25216.00 IOPS, 98.50 MiB/s [2024-11-03T10:02:28.104Z] 24853.33 IOPS, 97.08 MiB/s [2024-11-03T10:02:28.669Z] 23568.00 IOPS, 92.06 MiB/s [2024-11-03T10:02:28.669Z] 22950.40 IOPS, 89.65 MiB/s
00:08:00.307 Latency(us)
00:08:00.307 [2024-11-03T10:02:28.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:00.307 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x0 length 0xbd0bd
00:08:00.307 Nvme0n1 : 5.06 1745.53 6.82 0.00 0.00 73098.89 11998.13 75013.51
00:08:00.307 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:00.307 Nvme0n1 : 5.06 1491.20 5.83 0.00 0.00 85649.70 12149.37 88322.36
00:08:00.307 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x0 length 0x4ff80
00:08:00.307 Nvme1n1p1 : 5.06 1744.93 6.82 0.00 0.00 73028.13 13409.67 70980.53
00:08:00.307 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x4ff80 length 0x4ff80
00:08:00.307 Nvme1n1p1 : 5.07 1490.40 5.82 0.00 0.00 85430.74 14115.45 75820.11
00:08:00.307 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x0 length 0x4ff7f
00:08:00.307 Nvme1n1p2 : 5.06 1744.31 6.81 0.00 0.00 72960.61 14518.74 69367.34
00:08:00.307 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:08:00.307 Nvme1n1p2 : 5.07 1489.97 5.82 0.00 0.00 85273.66 12905.55 68964.04
00:08:00.307 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x0 length 0x80000
00:08:00.307 Nvme2n1 : 5.06 1743.80 6.81 0.00 0.00 72875.49 15728.64 71787.13
00:08:00.307 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x80000 length 0x80000
00:08:00.307 Nvme2n1 : 5.07 1489.36 5.82 0.00 0.00 85137.06 13913.80 67350.84
00:08:00.307 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x0 length 0x80000
00:08:00.307 Nvme2n2 : 5.07 1742.80 6.81 0.00 0.00 72810.83 15728.64 72190.42
00:08:00.307 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x80000 length 0x80000
00:08:00.307 Nvme2n2 : 5.07 1488.46 5.81 0.00 0.00 85030.68 15829.46 68560.74
00:08:00.307 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x0 length 0x80000
00:08:00.307 Nvme2n3 : 5.07 1741.86 6.80 0.00 0.00 72727.00 14014.62 74206.92
00:08:00.307 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x80000 length 0x80000
00:08:00.307 Nvme2n3 : 5.07 1488.18 5.81 0.00 0.00 84921.97 14720.39 71383.83
00:08:00.307 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x0 length 0x20000
00:08:00.307 Nvme3n1 : 5.08 1751.58 6.84 0.00 0.00 72263.57 1852.65 75820.11
00:08:00.307 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:00.307 Verification LBA range: start 0x20000 length 0x20000
00:08:00.308 Nvme3n1 : 5.08 1487.88 5.81 0.00 0.00 84850.31 12754.31 74206.92
00:08:00.308 [2024-11-03T10:02:28.670Z] ===================================================================================================================
00:08:00.308 [2024-11-03T10:02:28.670Z] Total : 22640.27 88.44 0.00 0.00 78517.83 1852.65 88322.36
00:08:01.246 ************************************
00:08:01.246 END TEST bdev_verify
00:08:01.246 ************************************
00:08:01.246
00:08:01.246 real 0m6.453s
00:08:01.246 user 0m12.234s
00:08:01.246 sys 0m0.180s
00:08:01.246 10:02:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:01.246 10:02:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:01.246 10:02:29 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:01.246 10:02:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:08:01.246 10:02:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:01.246 10:02:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:01.246 ************************************
00:08:01.246 START TEST bdev_verify_big_io
00:08:01.246 ************************************
00:08:01.246 10:02:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:01.246 [2024-11-03 10:02:29.395202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:08:01.246 [2024-11-03 10:02:29.395303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73890 ]
00:08:01.246 [2024-11-03 10:02:29.528496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:01.246 [2024-11-03 10:02:29.565463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:01.246 [2024-11-03 10:02:29.565504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.815 Running I/O for 5 seconds...
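A quick cross-check of the bdev_verify table above: the MiB/s column is just IOPS times the 4096-byte I/O size, so 22640.27 IOPS x 4096 B is about 92.7 MB/s, which is 88.44 MiB/s after dividing by 1048576, matching the Total row.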
00:08:07.973 1897.00 IOPS, 118.56 MiB/s [2024-11-03T10:02:36.335Z] 3808.00 IOPS, 238.00 MiB/s
00:08:07.973 Latency(us)
00:08:07.973 [2024-11-03T10:02:36.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:07.973 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x0 length 0xbd0b
00:08:07.973 Nvme0n1 : 5.79 122.81 7.68 0.00 0.00 993184.18 12048.54 1297007.85
00:08:07.973 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:07.973 Nvme0n1 : 5.66 118.10 7.38 0.00 0.00 1041303.29 15627.82 980821.86
00:08:07.973 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x0 length 0x4ff8
00:08:07.973 Nvme1n1p1 : 5.79 122.09 7.63 0.00 0.00 958137.60 101631.21 1096971.82
00:08:07.973 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x4ff8 length 0x4ff8
00:08:07.973 Nvme1n1p1 : 5.72 123.00 7.69 0.00 0.00 987351.11 55655.19 896935.78
00:08:07.973 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x0 length 0x4ff7
00:08:07.973 Nvme1n1p2 : 5.87 127.02 7.94 0.00 0.00 899924.30 118569.75 903388.55
00:08:07.973 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x4ff7 length 0x4ff7
00:08:07.973 Nvme1n1p2 : 5.73 122.96 7.68 0.00 0.00 962253.80 56461.78 922746.88
00:08:07.973 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x0 length 0x8000
00:08:07.973 Nvme2n1 : 5.91 133.67 8.35 0.00 0.00 839007.12 21072.34 1458327.24
00:08:07.973 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x8000 length 0x8000
00:08:07.973 Nvme2n1 : 5.83 127.82 7.99 0.00 0.00 905499.51 60898.07 1103424.59
00:08:07.973 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x0 length 0x8000
00:08:07.973 Nvme2n2 : 5.92 138.57 8.66 0.00 0.00 780766.48 14317.10 1490591.11
00:08:07.973 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x8000 length 0x8000
00:08:07.973 Nvme2n2 : 5.83 127.11 7.94 0.00 0.00 885509.81 61301.37 993727.41
00:08:07.973 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x0 length 0x8000
00:08:07.973 Nvme2n3 : 6.02 155.56 9.72 0.00 0.00 680974.54 11342.77 2181038.08
00:08:07.973 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x8000 length 0x8000
00:08:07.973 Nvme2n3 : 5.86 135.38 8.46 0.00 0.00 816550.62 33272.12 1013085.74
00:08:07.973 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x0 length 0x2000
00:08:07.973 Nvme3n1 : 6.12 181.16 11.32 0.00 0.00 567555.90 392.27 2193943.63
00:08:07.973 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:07.973 Verification LBA range: start 0x2000 length 0x2000
00:08:07.974 Nvme3n1 : 5.87 147.59 9.22 0.00 0.00 732837.43 2545.82 1116330.14
00:08:07.974 [2024-11-03T10:02:36.336Z] ===================================================================================================================
00:08:07.974 [2024-11-03T10:02:36.336Z] Total : 1882.83 117.68 0.00 0.00 842940.75 392.27 2193943.63
00:08:09.873
00:08:09.873 real 0m8.508s
00:08:09.873 user 0m16.330s
00:08:09.873 sys 0m0.210s
00:08:09.873 10:02:37 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:09.873 ************************************
00:08:09.873 END TEST bdev_verify_big_io
00:08:09.873 ************************************
00:08:09.873 10:02:37 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:09.873 10:02:37 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:09.873 10:02:37 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:08:09.873 10:02:37 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:09.873 10:02:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:09.873 ************************************
00:08:09.873 START TEST bdev_write_zeroes
00:08:09.873 ************************************
00:08:09.873 10:02:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:09.873 [2024-11-03 10:02:37.964517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:08:09.873 [2024-11-03 10:02:37.964618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73992 ]
00:08:09.873 [2024-11-03 10:02:38.097762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.873 [2024-11-03 10:02:38.130177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.440 Running I/O for 1 seconds...
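The same arithmetic holds for the 65536-byte bdev_verify_big_io run above: 1882.83 IOPS x 65536 B is about 123.4 MB/s, i.e. 117.68 MiB/s, again matching its Total row.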
00:08:11.377 64512.00 IOPS, 252.00 MiB/s
00:08:11.377 Latency(us)
00:08:11.377 [2024-11-03T10:02:39.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:11.377 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.377 Nvme0n1 : 1.03 9132.99 35.68 0.00 0.00 13978.63 6276.33 27424.30
00:08:11.377 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.377 Nvme1n1p1 : 1.03 9096.46 35.53 0.00 0.00 13951.63 9175.04 26416.05
00:08:11.377 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.377 Nvme1n1p2 : 1.03 9121.67 35.63 0.00 0.00 13947.46 9527.93 25811.10
00:08:11.377 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.377 Nvme2n1 : 1.03 9111.48 35.59 0.00 0.00 13943.46 9981.64 24399.56
00:08:11.377 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.377 Nvme2n2 : 1.03 9101.28 35.55 0.00 0.00 13925.56 10384.94 24802.86
00:08:11.377 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.377 Nvme2n3 : 1.03 9090.92 35.51 0.00 0.00 13914.67 9275.86 26012.75
00:08:11.377 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:11.377 Nvme3n1 : 1.04 9080.72 35.47 0.00 0.00 13900.60 9074.22 27222.65
00:08:11.377 [2024-11-03T10:02:39.739Z] ===================================================================================================================
00:08:11.377 [2024-11-03T10:02:39.739Z] Total : 63735.51 248.97 0.00 0.00 13937.42 6276.33 27424.30
00:08:11.639
00:08:11.639 real 0m1.842s
00:08:11.639 user 0m1.571s
00:08:11.639 sys 0m0.159s
00:08:11.639 ************************************
00:08:11.639 END TEST bdev_write_zeroes
00:08:11.639 ************************************
00:08:11.639 10:02:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:11.639 10:02:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:11.639 10:02:39 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:11.639 10:02:39 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:08:11.639 10:02:39 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:11.639 10:02:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:11.639 ************************************
00:08:11.639 START TEST bdev_json_nonenclosed
00:08:11.639 ************************************
00:08:11.639 10:02:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:11.639 [2024-11-03 10:02:39.865363] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:08:11.639 [2024-11-03 10:02:39.865503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74033 ] 00:08:11.899 [2024-11-03 10:02:40.000399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.899 [2024-11-03 10:02:40.055558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.899 [2024-11-03 10:02:40.055697] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:11.899 [2024-11-03 10:02:40.055716] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.899 [2024-11-03 10:02:40.055729] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.899 00:08:11.899 real 0m0.364s 00:08:11.899 user 0m0.159s 00:08:11.899 sys 0m0.100s 00:08:11.899 10:02:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.899 ************************************ 00:08:11.899 END TEST bdev_json_nonenclosed 00:08:11.899 ************************************ 00:08:11.899 10:02:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:11.899 10:02:40 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.899 10:02:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:11.899 10:02:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.899 10:02:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.899 ************************************ 00:08:11.899 START TEST bdev_json_nonarray 00:08:11.899 ************************************ 00:08:11.899 10:02:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:12.159 [2024-11-03 10:02:40.280980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:12.159 [2024-11-03 10:02:40.281087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74054 ] 00:08:12.159 [2024-11-03 10:02:40.415801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.159 [2024-11-03 10:02:40.466494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.159 [2024-11-03 10:02:40.466621] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
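Both JSON negative tests here pass by provoking a specific json_config rejection: bdev_json_nonenclosed expects the 'not enclosed in {}' error and bdev_json_nonarray expects the ''subsystems' should be an array' error, each followed by the clean spdk_app_stop seen in the trace. Hypothetical minimal configs that would trip each branch (the real nonenclosed.json and nonarray.json under test/bdev/ may differ):

  "subsystems": []        # nonenclosed: the top level is not a {...} object
  { "subsystems": {} }    # nonarray: "subsystems" is an object, not an array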
00:08:12.159 [2024-11-03 10:02:40.466645] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:12.159 [2024-11-03 10:02:40.466657] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.420 00:08:12.420 real 0m0.343s 00:08:12.420 user 0m0.144s 00:08:12.420 sys 0m0.095s 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:12.420 ************************************ 00:08:12.420 END TEST bdev_json_nonarray 00:08:12.420 ************************************ 00:08:12.420 10:02:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:12.420 10:02:40 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:12.420 10:02:40 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:12.420 10:02:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.420 10:02:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.420 10:02:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.420 ************************************ 00:08:12.420 START TEST bdev_gpt_uuid 00:08:12.420 ************************************ 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74074 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74074 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74074 ']' 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.420 10:02:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:12.420 [2024-11-03 10:02:40.707561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
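The bdev_gpt_uuid test starting here drives a bare spdk_tgt over its RPC socket: load the bdev config, wait for bdev examine to finish, then fetch each GPT partition bdev by its unique partition GUID and check the returned JSON. Condensed from the trace below into the suite's own helpers (rpc_cmd wraps scripts/rpc.py against the target's socket), the flow for the first partition is roughly:
rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
rpc_cmd bdev_wait_for_examine
bdev=$(rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
# The assertions: exactly one bdev came back, and its GPT metadata carries
# the same unique partition GUID it was looked up by.
jq -r length <<<"$bdev"                                            # expect 1
jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev"  # expect the GUID above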
00:08:12.420 [2024-11-03 10:02:40.707694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74074 ] 00:08:12.682 [2024-11-03 10:02:40.841557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.682 [2024-11-03 10:02:40.874556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.253 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.253 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:13.253 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:13.253 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.253 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.515 Some configs were skipped because the RPC state that can call them passed over. 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:13.515 { 00:08:13.515 "name": "Nvme1n1p1", 00:08:13.515 "aliases": [ 00:08:13.515 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:13.515 ], 00:08:13.515 "product_name": "GPT Disk", 00:08:13.515 "block_size": 4096, 00:08:13.515 "num_blocks": 655104, 00:08:13.515 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:13.515 "assigned_rate_limits": { 00:08:13.515 "rw_ios_per_sec": 0, 00:08:13.515 "rw_mbytes_per_sec": 0, 00:08:13.515 "r_mbytes_per_sec": 0, 00:08:13.515 "w_mbytes_per_sec": 0 00:08:13.515 }, 00:08:13.515 "claimed": false, 00:08:13.515 "zoned": false, 00:08:13.515 "supported_io_types": { 00:08:13.515 "read": true, 00:08:13.515 "write": true, 00:08:13.515 "unmap": true, 00:08:13.515 "flush": true, 00:08:13.515 "reset": true, 00:08:13.515 "nvme_admin": false, 00:08:13.515 "nvme_io": false, 00:08:13.515 "nvme_io_md": false, 00:08:13.515 "write_zeroes": true, 00:08:13.515 "zcopy": false, 00:08:13.515 "get_zone_info": false, 00:08:13.515 "zone_management": false, 00:08:13.515 "zone_append": false, 00:08:13.515 "compare": true, 00:08:13.515 "compare_and_write": false, 00:08:13.515 "abort": true, 00:08:13.515 "seek_hole": false, 00:08:13.515 "seek_data": false, 00:08:13.515 "copy": true, 00:08:13.515 "nvme_iov_md": false 00:08:13.515 }, 00:08:13.515 "driver_specific": { 
00:08:13.515 "gpt": { 00:08:13.515 "base_bdev": "Nvme1n1", 00:08:13.515 "offset_blocks": 256, 00:08:13.515 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:13.515 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:13.515 "partition_name": "SPDK_TEST_first" 00:08:13.515 } 00:08:13.515 } 00:08:13.515 } 00:08:13.515 ]' 00:08:13.515 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:13.776 { 00:08:13.776 "name": "Nvme1n1p2", 00:08:13.776 "aliases": [ 00:08:13.776 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:13.776 ], 00:08:13.776 "product_name": "GPT Disk", 00:08:13.776 "block_size": 4096, 00:08:13.776 "num_blocks": 655103, 00:08:13.776 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:13.776 "assigned_rate_limits": { 00:08:13.776 "rw_ios_per_sec": 0, 00:08:13.776 "rw_mbytes_per_sec": 0, 00:08:13.776 "r_mbytes_per_sec": 0, 00:08:13.776 "w_mbytes_per_sec": 0 00:08:13.776 }, 00:08:13.776 "claimed": false, 00:08:13.776 "zoned": false, 00:08:13.776 "supported_io_types": { 00:08:13.776 "read": true, 00:08:13.776 "write": true, 00:08:13.776 "unmap": true, 00:08:13.776 "flush": true, 00:08:13.776 "reset": true, 00:08:13.776 "nvme_admin": false, 00:08:13.776 "nvme_io": false, 00:08:13.776 "nvme_io_md": false, 00:08:13.776 "write_zeroes": true, 00:08:13.776 "zcopy": false, 00:08:13.776 "get_zone_info": false, 00:08:13.776 "zone_management": false, 00:08:13.776 "zone_append": false, 00:08:13.776 "compare": true, 00:08:13.776 "compare_and_write": false, 00:08:13.776 "abort": true, 00:08:13.776 "seek_hole": false, 00:08:13.776 "seek_data": false, 00:08:13.776 "copy": true, 00:08:13.776 "nvme_iov_md": false 00:08:13.776 }, 00:08:13.776 "driver_specific": { 00:08:13.776 "gpt": { 00:08:13.776 "base_bdev": "Nvme1n1", 00:08:13.776 "offset_blocks": 655360, 00:08:13.776 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:13.776 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:13.776 "partition_name": "SPDK_TEST_second" 00:08:13.776 } 00:08:13.776 } 00:08:13.776 } 00:08:13.776 ]' 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:13.776 10:02:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74074 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74074 ']' 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74074 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74074 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74074' 00:08:13.776 killing process with pid 74074 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74074 00:08:13.776 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74074 00:08:14.396 00:08:14.396 real 0m1.814s 00:08:14.396 user 0m1.915s 00:08:14.396 sys 0m0.359s 00:08:14.396 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.396 ************************************ 00:08:14.396 END TEST bdev_gpt_uuid 00:08:14.396 ************************************ 00:08:14.396 10:02:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:14.396 10:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:14.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:14.677 Waiting for block devices as requested 00:08:14.677 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.938 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:14.938 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:15.201 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:20.488 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:20.489 10:02:48 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:20.489 10:02:48 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:20.489 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:20.489 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:20.489 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:20.489 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:20.489 10:02:48 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:20.489 00:08:20.489 real 0m48.697s 00:08:20.489 user 1m2.469s 00:08:20.489 sys 0m7.373s 00:08:20.489 10:02:48 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.489 ************************************ 00:08:20.489 10:02:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.489 END TEST blockdev_nvme_gpt 00:08:20.489 ************************************ 00:08:20.489 10:02:48 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:20.489 10:02:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.489 10:02:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.489 10:02:48 -- common/autotest_common.sh@10 -- # set +x 00:08:20.489 ************************************ 00:08:20.489 START TEST nvme 00:08:20.489 ************************************ 00:08:20.489 10:02:48 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:20.489 * Looking for test storage... 00:08:20.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:20.489 10:02:48 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:20.489 10:02:48 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:20.489 10:02:48 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:20.750 10:02:48 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.750 10:02:48 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.750 10:02:48 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.750 10:02:48 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.750 10:02:48 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.750 10:02:48 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.750 10:02:48 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.750 10:02:48 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.750 10:02:48 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:20.750 10:02:48 nvme -- scripts/common.sh@345 -- # : 1 00:08:20.750 10:02:48 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.750 10:02:48 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.750 10:02:48 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:20.750 10:02:48 nvme -- scripts/common.sh@353 -- # local d=1 00:08:20.750 10:02:48 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.750 10:02:48 nvme -- scripts/common.sh@355 -- # echo 1 00:08:20.750 10:02:48 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.750 10:02:48 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@353 -- # local d=2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.750 10:02:48 nvme -- scripts/common.sh@355 -- # echo 2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.750 10:02:48 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.750 10:02:48 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.750 10:02:48 nvme -- scripts/common.sh@368 -- # return 0 00:08:20.750 10:02:48 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.750 10:02:48 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:20.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.750 --rc genhtml_branch_coverage=1 00:08:20.750 --rc genhtml_function_coverage=1 00:08:20.750 --rc genhtml_legend=1 00:08:20.750 --rc geninfo_all_blocks=1 00:08:20.750 --rc geninfo_unexecuted_blocks=1 00:08:20.750 00:08:20.750 ' 00:08:20.750 10:02:48 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:20.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.750 --rc genhtml_branch_coverage=1 00:08:20.750 --rc genhtml_function_coverage=1 00:08:20.750 --rc genhtml_legend=1 00:08:20.750 --rc geninfo_all_blocks=1 00:08:20.750 --rc geninfo_unexecuted_blocks=1 00:08:20.750 00:08:20.750 ' 00:08:20.750 10:02:48 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:20.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.750 --rc genhtml_branch_coverage=1 00:08:20.750 --rc genhtml_function_coverage=1 00:08:20.750 --rc genhtml_legend=1 00:08:20.750 --rc geninfo_all_blocks=1 00:08:20.750 --rc geninfo_unexecuted_blocks=1 00:08:20.750 00:08:20.750 ' 00:08:20.750 10:02:48 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:20.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.750 --rc genhtml_branch_coverage=1 00:08:20.750 --rc genhtml_function_coverage=1 00:08:20.750 --rc genhtml_legend=1 00:08:20.750 --rc geninfo_all_blocks=1 00:08:20.750 --rc geninfo_unexecuted_blocks=1 00:08:20.750 00:08:20.750 ' 00:08:20.750 10:02:48 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:21.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:21.585 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.585 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.846 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.846 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.846 10:02:50 nvme -- nvme/nvme.sh@79 -- # uname 00:08:21.846 10:02:50 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:21.846 10:02:50 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:21.846 10:02:50 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:21.846 10:02:50 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:21.846 Waiting for stub to ready for secondary processes... 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1071 -- # stubpid=74704 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/74704 ]] 00:08:21.846 10:02:50 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:21.846 [2024-11-03 10:02:50.098844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:21.846 [2024-11-03 10:02:50.098994] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:22.790 10:02:51 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:22.790 10:02:51 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/74704 ]] 00:08:22.790 10:02:51 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:23.052 [2024-11-03 10:02:51.181623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.052 [2024-11-03 10:02:51.211604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.052 [2024-11-03 10:02:51.211984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.052 [2024-11-03 10:02:51.211943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.052 [2024-11-03 10:02:51.225100] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:23.052 [2024-11-03 10:02:51.225157] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:23.052 [2024-11-03 10:02:51.241106] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:23.052 [2024-11-03 10:02:51.241394] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:23.052 [2024-11-03 10:02:51.243987] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:23.052 [2024-11-03 10:02:51.244354] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:23.052 [2024-11-03 10:02:51.244475] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:23.052 [2024-11-03 10:02:51.245943] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:23.052 [2024-11-03 10:02:51.246303] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:23.052 [2024-11-03 10:02:51.246393] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:23.052 [2024-11-03 10:02:51.248466] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:23.052 [2024-11-03 10:02:51.248799] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:23.052 [2024-11-03 10:02:51.248900] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:23.052 [2024-11-03 10:02:51.248970] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:23.052 [2024-11-03 10:02:51.249052] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:23.999 done. 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:23.999 10:02:52 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.999 ************************************ 00:08:23.999 START TEST nvme_reset 00:08:23.999 ************************************ 00:08:23.999 10:02:52 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:23.999 Initializing NVMe Controllers 00:08:23.999 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:23.999 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:23.999 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:23.999 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:23.999 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:23.999 00:08:23.999 real 0m0.189s 00:08:23.999 user 0m0.055s 00:08:23.999 sys 0m0.085s 00:08:23.999 10:02:52 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.999 ************************************ 00:08:23.999 END TEST nvme_reset 00:08:23.999 10:02:52 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:23.999 ************************************ 00:08:23.999 10:02:52 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.999 10:02:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.999 ************************************ 00:08:23.999 START TEST nvme_identify 00:08:23.999 ************************************ 00:08:23.999 10:02:52 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:23.999 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:23.999 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:23.999 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:23.999 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:23.999 10:02:52 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:23.999 10:02:52 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:23.999 10:02:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:23.999 10:02:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:23.999 10:02:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:24.266 10:02:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:24.266 10:02:52 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:24.266 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:24.266 [2024-11-03 10:02:52.557757] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 74737 terminated unexpected 00:08:24.266 ===================================================== 00:08:24.266 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.266 ===================================================== 00:08:24.266 Controller Capabilities/Features 00:08:24.266 ================================ 00:08:24.266 Vendor ID: 1b36 00:08:24.266 Subsystem Vendor ID: 1af4 00:08:24.266 Serial Number: 12341 00:08:24.266 Model Number: QEMU NVMe Ctrl 00:08:24.266 Firmware Version: 8.0.0 00:08:24.266 Recommended Arb Burst: 6 00:08:24.266 IEEE OUI Identifier: 00 54 52 00:08:24.266 Multi-path I/O 00:08:24.266 May have multiple subsystem ports: No 00:08:24.266 May have multiple controllers: No 00:08:24.266 Associated with SR-IOV VF: No 00:08:24.266 Max Data Transfer Size: 524288 00:08:24.266 Max Number of Namespaces: 256 00:08:24.266 Max Number of I/O Queues: 64 00:08:24.266 NVMe Specification Version (VS): 1.4 00:08:24.266 NVMe Specification Version (Identify): 1.4 00:08:24.266 Maximum Queue Entries: 2048 00:08:24.266 Contiguous Queues Required: Yes 00:08:24.266 Arbitration Mechanisms Supported 00:08:24.266 Weighted Round Robin: Not Supported 00:08:24.266 Vendor Specific: Not Supported 00:08:24.266 Reset Timeout: 7500 ms 00:08:24.266 Doorbell Stride: 4 bytes 00:08:24.266 NVM Subsystem Reset: Not Supported 00:08:24.266 Command Sets Supported 00:08:24.266 NVM Command Set: Supported 00:08:24.266 Boot Partition: Not Supported 00:08:24.266 Memory Page Size Minimum: 4096 bytes 00:08:24.266 Memory Page Size Maximum: 65536 bytes 00:08:24.266 Persistent Memory Region: Not Supported 00:08:24.266 Optional Asynchronous Events Supported 00:08:24.266 Namespace Attribute Notices: Supported 00:08:24.266 Firmware Activation Notices: Not Supported 00:08:24.266 ANA Change Notices: Not Supported 00:08:24.266 PLE Aggregate Log Change Notices: Not Supported 00:08:24.266 LBA Status Info Alert Notices: Not Supported 00:08:24.266 EGE Aggregate Log Change Notices: Not Supported 00:08:24.266 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.266 Zone Descriptor Change Notices: Not Supported 00:08:24.266 Discovery Log Change Notices: Not Supported 00:08:24.266 Controller Attributes 00:08:24.266 128-bit Host Identifier: Not Supported 00:08:24.266 Non-Operational Permissive Mode: Not Supported 00:08:24.266 NVM Sets: Not Supported 00:08:24.266 Read Recovery Levels: Not Supported 00:08:24.266 Endurance Groups: Not Supported 00:08:24.266 Predictable Latency Mode: Not Supported 00:08:24.266 Traffic Based Keep ALive: Not Supported 00:08:24.266 Namespace Granularity: Not Supported 00:08:24.266 SQ Associations: Not Supported 00:08:24.266 UUID List: Not Supported 00:08:24.266 Multi-Domain Subsystem: Not Supported 00:08:24.266 Fixed Capacity Management: Not Supported 00:08:24.266 Variable Capacity Management: Not Supported 00:08:24.266 Delete Endurance Group: Not Supported 00:08:24.266 Delete NVM Set: Not Supported 00:08:24.266 Extended LBA Formats Supported: Supported 00:08:24.266 Flexible Data Placement Supported: Not Supported 00:08:24.266 00:08:24.266 Controller Memory Buffer Support 00:08:24.266 ================================ 00:08:24.266 Supported: No 00:08:24.266 
00:08:24.266 Persistent Memory Region Support 00:08:24.266 ================================ 00:08:24.266 Supported: No 00:08:24.266 00:08:24.266 Admin Command Set Attributes 00:08:24.266 ============================ 00:08:24.266 Security Send/Receive: Not Supported 00:08:24.266 Format NVM: Supported 00:08:24.266 Firmware Activate/Download: Not Supported 00:08:24.266 Namespace Management: Supported 00:08:24.266 Device Self-Test: Not Supported 00:08:24.266 Directives: Supported 00:08:24.266 NVMe-MI: Not Supported 00:08:24.266 Virtualization Management: Not Supported 00:08:24.266 Doorbell Buffer Config: Supported 00:08:24.266 Get LBA Status Capability: Not Supported 00:08:24.266 Command & Feature Lockdown Capability: Not Supported 00:08:24.266 Abort Command Limit: 4 00:08:24.266 Async Event Request Limit: 4 00:08:24.266 Number of Firmware Slots: N/A 00:08:24.267 Firmware Slot 1 Read-Only: N/A 00:08:24.267 Firmware Activation Without Reset: N/A 00:08:24.267 Multiple Update Detection Support: N/A 00:08:24.267 Firmware Update Granularity: No Information Provided 00:08:24.267 Per-Namespace SMART Log: Yes 00:08:24.267 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.267 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:24.267 Command Effects Log Page: Supported 00:08:24.267 Get Log Page Extended Data: Supported 00:08:24.267 Telemetry Log Pages: Not Supported 00:08:24.267 Persistent Event Log Pages: Not Supported 00:08:24.267 Supported Log Pages Log Page: May Support 00:08:24.267 Commands Supported & Effects Log Page: Not Supported 00:08:24.267 Feature Identifiers & Effects Log Page:May Support 00:08:24.267 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.267 Data Area 4 for Telemetry Log: Not Supported 00:08:24.267 Error Log Page Entries Supported: 1 00:08:24.267 Keep Alive: Not Supported 00:08:24.267 00:08:24.267 NVM Command Set Attributes 00:08:24.267 ========================== 00:08:24.267 Submission Queue Entry Size 00:08:24.267 Max: 64 00:08:24.267 Min: 64 00:08:24.267 Completion Queue Entry Size 00:08:24.267 Max: 16 00:08:24.267 Min: 16 00:08:24.267 Number of Namespaces: 256 00:08:24.267 Compare Command: Supported 00:08:24.267 Write Uncorrectable Command: Not Supported 00:08:24.267 Dataset Management Command: Supported 00:08:24.267 Write Zeroes Command: Supported 00:08:24.267 Set Features Save Field: Supported 00:08:24.267 Reservations: Not Supported 00:08:24.267 Timestamp: Supported 00:08:24.267 Copy: Supported 00:08:24.267 Volatile Write Cache: Present 00:08:24.267 Atomic Write Unit (Normal): 1 00:08:24.267 Atomic Write Unit (PFail): 1 00:08:24.267 Atomic Compare & Write Unit: 1 00:08:24.267 Fused Compare & Write: Not Supported 00:08:24.267 Scatter-Gather List 00:08:24.267 SGL Command Set: Supported 00:08:24.267 SGL Keyed: Not Supported 00:08:24.267 SGL Bit Bucket Descriptor: Not Supported 00:08:24.267 SGL Metadata Pointer: Not Supported 00:08:24.267 Oversized SGL: Not Supported 00:08:24.267 SGL Metadata Address: Not Supported 00:08:24.267 SGL Offset: Not Supported 00:08:24.267 Transport SGL Data Block: Not Supported 00:08:24.267 Replay Protected Memory Block: Not Supported 00:08:24.267 00:08:24.267 Firmware Slot Information 00:08:24.267 ========================= 00:08:24.267 Active slot: 1 00:08:24.267 Slot 1 Firmware Revision: 1.0 00:08:24.267 00:08:24.267 00:08:24.267 Commands Supported and Effects 00:08:24.267 ============================== 00:08:24.267 Admin Commands 00:08:24.267 -------------- 00:08:24.267 Delete I/O Submission Queue (00h): Supported 00:08:24.267 
Create I/O Submission Queue (01h): Supported 00:08:24.267 Get Log Page (02h): Supported 00:08:24.267 Delete I/O Completion Queue (04h): Supported 00:08:24.267 Create I/O Completion Queue (05h): Supported 00:08:24.267 Identify (06h): Supported 00:08:24.267 Abort (08h): Supported 00:08:24.267 Set Features (09h): Supported 00:08:24.267 Get Features (0Ah): Supported 00:08:24.267 Asynchronous Event Request (0Ch): Supported 00:08:24.267 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.267 Directive Send (19h): Supported 00:08:24.267 Directive Receive (1Ah): Supported 00:08:24.267 Virtualization Management (1Ch): Supported 00:08:24.267 Doorbell Buffer Config (7Ch): Supported 00:08:24.267 Format NVM (80h): Supported LBA-Change 00:08:24.267 I/O Commands 00:08:24.267 ------------ 00:08:24.267 Flush (00h): Supported LBA-Change 00:08:24.267 Write (01h): Supported LBA-Change 00:08:24.267 Read (02h): Supported 00:08:24.267 Compare (05h): Supported 00:08:24.267 Write Zeroes (08h): Supported LBA-Change 00:08:24.267 Dataset Management (09h): Supported LBA-Change 00:08:24.267 Unknown (0Ch): Supported 00:08:24.267 Unknown (12h): Supported 00:08:24.267 Copy (19h): Supported LBA-Change 00:08:24.267 Unknown (1Dh): Supported LBA-Change 00:08:24.267 00:08:24.267 Error Log 00:08:24.267 ========= 00:08:24.267 00:08:24.267 Arbitration 00:08:24.267 =========== 00:08:24.267 Arbitration Burst: no limit 00:08:24.267 00:08:24.267 Power Management 00:08:24.267 ================ 00:08:24.267 Number of Power States: 1 00:08:24.267 Current Power State: Power State #0 00:08:24.267 Power State #0: 00:08:24.267 Max Power: 25.00 W 00:08:24.267 Non-Operational State: Operational 00:08:24.267 Entry Latency: 16 microseconds 00:08:24.267 Exit Latency: 4 microseconds 00:08:24.267 Relative Read Throughput: 0 00:08:24.267 Relative Read Latency: 0 00:08:24.267 Relative Write Throughput: 0 00:08:24.267 Relative Write Latency: 0 00:08:24.267 [2024-11-03 10:02:52.559743] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 74737 terminated unexpected 00:08:24.267 Idle Power: Not Reported 00:08:24.267 Active Power: Not Reported 00:08:24.267 Non-Operational Permissive Mode: Not Supported 00:08:24.267 00:08:24.267 Health Information 00:08:24.267 ================== 00:08:24.267 Critical Warnings: 00:08:24.267 Available Spare Space: OK 00:08:24.267 Temperature: OK 00:08:24.267 Device Reliability: OK 00:08:24.267 Read Only: No 00:08:24.267 Volatile Memory Backup: OK 00:08:24.267 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.267 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.267 Available Spare: 0% 00:08:24.267 Available Spare Threshold: 0% 00:08:24.267 Life Percentage Used: 0% 00:08:24.267 Data Units Read: 1085 00:08:24.267 Data Units Written: 958 00:08:24.267 Host Read Commands: 55574 00:08:24.267 Host Write Commands: 54475 00:08:24.267 Controller Busy Time: 0 minutes 00:08:24.267 Power Cycles: 0 00:08:24.267 Power On Hours: 0 hours 00:08:24.267 Unsafe Shutdowns: 0 00:08:24.267 Unrecoverable Media Errors: 0 00:08:24.267 Lifetime Error Log Entries: 0 00:08:24.267 Warning Temperature Time: 0 minutes 00:08:24.267 Critical Temperature Time: 0 minutes 00:08:24.267 00:08:24.267 Number of Queues 00:08:24.267 ================ 00:08:24.267 Number of I/O Submission Queues: 64 00:08:24.267 Number of I/O Completion Queues: 64 00:08:24.267 00:08:24.267 ZNS Specific Controller Data 00:08:24.267 ============================ 00:08:24.267 Zone Append Size Limit: 0 00:08:24.267 00:08:24.267
00:08:24.267 Active Namespaces 00:08:24.267 ================= 00:08:24.267 Namespace ID:1 00:08:24.267 Error Recovery Timeout: Unlimited 00:08:24.267 Command Set Identifier: NVM (00h) 00:08:24.267 Deallocate: Supported 00:08:24.267 Deallocated/Unwritten Error: Supported 00:08:24.267 Deallocated Read Value: All 0x00 00:08:24.267 Deallocate in Write Zeroes: Not Supported 00:08:24.267 Deallocated Guard Field: 0xFFFF 00:08:24.267 Flush: Supported 00:08:24.267 Reservation: Not Supported 00:08:24.267 Namespace Sharing Capabilities: Private 00:08:24.267 Size (in LBAs): 1310720 (5GiB) 00:08:24.267 Capacity (in LBAs): 1310720 (5GiB) 00:08:24.267 Utilization (in LBAs): 1310720 (5GiB) 00:08:24.267 Thin Provisioning: Not Supported 00:08:24.267 Per-NS Atomic Units: No 00:08:24.267 Maximum Single Source Range Length: 128 00:08:24.267 Maximum Copy Length: 128 00:08:24.267 Maximum Source Range Count: 128 00:08:24.267 NGUID/EUI64 Never Reused: No 00:08:24.268 Namespace Write Protected: No 00:08:24.268 Number of LBA Formats: 8 00:08:24.268 Current LBA Format: LBA Format #04 00:08:24.268 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.268 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.268 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.268 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.268 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.268 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.268 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.268 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.268 00:08:24.268 NVM Specific Namespace Data 00:08:24.268 =========================== 00:08:24.268 Logical Block Storage Tag Mask: 0 00:08:24.268 Protection Information Capabilities: 00:08:24.268 16b Guard Protection Information Storage Tag Support: No 00:08:24.268 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.268 Storage Tag Check Read Support: No 00:08:24.268 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.268 ===================================================== 00:08:24.268 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.268 ===================================================== 00:08:24.268 Controller Capabilities/Features 00:08:24.268 ================================ 00:08:24.268 Vendor ID: 1b36 00:08:24.268 Subsystem Vendor ID: 1af4 00:08:24.268 Serial Number: 12343 00:08:24.268 Model Number: QEMU NVMe Ctrl 00:08:24.268 Firmware Version: 8.0.0 00:08:24.268 Recommended Arb Burst: 6 00:08:24.268 IEEE OUI Identifier: 00 54 52 00:08:24.268 Multi-path I/O 00:08:24.268 May have multiple subsystem ports: No 00:08:24.268 May have multiple controllers: Yes 00:08:24.268 Associated with SR-IOV VF: No 00:08:24.268 Max Data Transfer Size: 
524288 00:08:24.268 Max Number of Namespaces: 256 00:08:24.268 Max Number of I/O Queues: 64 00:08:24.268 NVMe Specification Version (VS): 1.4 00:08:24.268 NVMe Specification Version (Identify): 1.4 00:08:24.268 Maximum Queue Entries: 2048 00:08:24.268 Contiguous Queues Required: Yes 00:08:24.268 Arbitration Mechanisms Supported 00:08:24.268 Weighted Round Robin: Not Supported 00:08:24.268 Vendor Specific: Not Supported 00:08:24.268 Reset Timeout: 7500 ms 00:08:24.268 Doorbell Stride: 4 bytes 00:08:24.268 NVM Subsystem Reset: Not Supported 00:08:24.268 Command Sets Supported 00:08:24.268 NVM Command Set: Supported 00:08:24.268 Boot Partition: Not Supported 00:08:24.268 Memory Page Size Minimum: 4096 bytes 00:08:24.268 Memory Page Size Maximum: 65536 bytes 00:08:24.268 Persistent Memory Region: Not Supported 00:08:24.268 Optional Asynchronous Events Supported 00:08:24.268 Namespace Attribute Notices: Supported 00:08:24.268 Firmware Activation Notices: Not Supported 00:08:24.268 ANA Change Notices: Not Supported 00:08:24.268 PLE Aggregate Log Change Notices: Not Supported 00:08:24.268 LBA Status Info Alert Notices: Not Supported 00:08:24.268 EGE Aggregate Log Change Notices: Not Supported 00:08:24.268 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.268 Zone Descriptor Change Notices: Not Supported 00:08:24.268 Discovery Log Change Notices: Not Supported 00:08:24.268 Controller Attributes 00:08:24.268 128-bit Host Identifier: Not Supported 00:08:24.268 Non-Operational Permissive Mode: Not Supported 00:08:24.268 NVM Sets: Not Supported 00:08:24.268 Read Recovery Levels: Not Supported 00:08:24.268 Endurance Groups: Supported 00:08:24.268 Predictable Latency Mode: Not Supported 00:08:24.268 Traffic Based Keep ALive: Not Supported 00:08:24.268 Namespace Granularity: Not Supported 00:08:24.268 SQ Associations: Not Supported 00:08:24.268 UUID List: Not Supported 00:08:24.268 Multi-Domain Subsystem: Not Supported 00:08:24.268 Fixed Capacity Management: Not Supported 00:08:24.268 Variable Capacity Management: Not Supported 00:08:24.268 Delete Endurance Group: Not Supported 00:08:24.268 Delete NVM Set: Not Supported 00:08:24.268 Extended LBA Formats Supported: Supported 00:08:24.268 Flexible Data Placement Supported: Supported 00:08:24.268 00:08:24.268 Controller Memory Buffer Support 00:08:24.268 ================================ 00:08:24.268 Supported: No 00:08:24.268 00:08:24.268 Persistent Memory Region Support 00:08:24.268 ================================ 00:08:24.268 Supported: No 00:08:24.268 00:08:24.268 Admin Command Set Attributes 00:08:24.268 ============================ 00:08:24.268 Security Send/Receive: Not Supported 00:08:24.268 Format NVM: Supported 00:08:24.268 Firmware Activate/Download: Not Supported 00:08:24.268 Namespace Management: Supported 00:08:24.268 Device Self-Test: Not Supported 00:08:24.268 Directives: Supported 00:08:24.268 NVMe-MI: Not Supported 00:08:24.268 Virtualization Management: Not Supported 00:08:24.268 Doorbell Buffer Config: Supported 00:08:24.268 Get LBA Status Capability: Not Supported 00:08:24.268 Command & Feature Lockdown Capability: Not Supported 00:08:24.268 Abort Command Limit: 4 00:08:24.268 Async Event Request Limit: 4 00:08:24.268 Number of Firmware Slots: N/A 00:08:24.268 Firmware Slot 1 Read-Only: N/A 00:08:24.268 Firmware Activation Without Reset: N/A 00:08:24.268 Multiple Update Detection Support: N/A 00:08:24.268 Firmware Update Granularity: No Information Provided 00:08:24.268 Per-Namespace SMART Log: Yes 00:08:24.268 Asymmetric 
Namespace Access Log Page: Not Supported 00:08:24.268 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:24.268 Command Effects Log Page: Supported 00:08:24.268 Get Log Page Extended Data: Supported 00:08:24.268 Telemetry Log Pages: Not Supported 00:08:24.268 Persistent Event Log Pages: Not Supported 00:08:24.268 Supported Log Pages Log Page: May Support 00:08:24.268 Commands Supported & Effects Log Page: Not Supported 00:08:24.268 Feature Identifiers & Effects Log Page:May Support 00:08:24.268 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.268 Data Area 4 for Telemetry Log: Not Supported 00:08:24.268 Error Log Page Entries Supported: 1 00:08:24.268 Keep Alive: Not Supported 00:08:24.268 00:08:24.268 NVM Command Set Attributes 00:08:24.268 ========================== 00:08:24.268 Submission Queue Entry Size 00:08:24.268 Max: 64 00:08:24.268 Min: 64 00:08:24.268 Completion Queue Entry Size 00:08:24.268 Max: 16 00:08:24.268 Min: 16 00:08:24.268 Number of Namespaces: 256 00:08:24.268 Compare Command: Supported 00:08:24.268 Write Uncorrectable Command: Not Supported 00:08:24.268 Dataset Management Command: Supported 00:08:24.268 Write Zeroes Command: Supported 00:08:24.268 Set Features Save Field: Supported 00:08:24.268 Reservations: Not Supported 00:08:24.268 Timestamp: Supported 00:08:24.268 Copy: Supported 00:08:24.268 Volatile Write Cache: Present 00:08:24.268 Atomic Write Unit (Normal): 1 00:08:24.269 Atomic Write Unit (PFail): 1 00:08:24.269 Atomic Compare & Write Unit: 1 00:08:24.269 Fused Compare & Write: Not Supported 00:08:24.269 Scatter-Gather List 00:08:24.269 SGL Command Set: Supported 00:08:24.269 SGL Keyed: Not Supported 00:08:24.269 SGL Bit Bucket Descriptor: Not Supported 00:08:24.269 SGL Metadata Pointer: Not Supported 00:08:24.269 Oversized SGL: Not Supported 00:08:24.269 SGL Metadata Address: Not Supported 00:08:24.269 SGL Offset: Not Supported 00:08:24.269 Transport SGL Data Block: Not Supported 00:08:24.269 Replay Protected Memory Block: Not Supported 00:08:24.269 00:08:24.269 Firmware Slot Information 00:08:24.269 ========================= 00:08:24.269 Active slot: 1 00:08:24.269 Slot 1 Firmware Revision: 1.0 00:08:24.269 00:08:24.269 00:08:24.269 Commands Supported and Effects 00:08:24.269 ============================== 00:08:24.269 Admin Commands 00:08:24.269 -------------- 00:08:24.269 Delete I/O Submission Queue (00h): Supported 00:08:24.269 Create I/O Submission Queue (01h): Supported 00:08:24.269 Get Log Page (02h): Supported 00:08:24.269 Delete I/O Completion Queue (04h): Supported 00:08:24.269 Create I/O Completion Queue (05h): Supported 00:08:24.269 Identify (06h): Supported 00:08:24.269 Abort (08h): Supported 00:08:24.269 Set Features (09h): Supported 00:08:24.269 Get Features (0Ah): Supported 00:08:24.269 Asynchronous Event Request (0Ch): Supported 00:08:24.269 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.269 Directive Send (19h): Supported 00:08:24.269 Directive Receive (1Ah): Supported 00:08:24.269 Virtualization Management (1Ch): Supported 00:08:24.269 Doorbell Buffer Config (7Ch): Supported 00:08:24.269 Format NVM (80h): Supported LBA-Change 00:08:24.269 I/O Commands 00:08:24.269 ------------ 00:08:24.269 Flush (00h): Supported LBA-Change 00:08:24.269 Write (01h): Supported LBA-Change 00:08:24.269 Read (02h): Supported 00:08:24.269 Compare (05h): Supported 00:08:24.269 Write Zeroes (08h): Supported LBA-Change 00:08:24.269 Dataset Management (09h): Supported LBA-Change 00:08:24.269 Unknown (0Ch): Supported 00:08:24.269 
Unknown (12h): Supported 00:08:24.269 Copy (19h): Supported LBA-Change 00:08:24.269 Unknown (1Dh): Supported LBA-Change 00:08:24.269 00:08:24.269 Error Log 00:08:24.269 ========= 00:08:24.269 00:08:24.269 Arbitration 00:08:24.269 =========== 00:08:24.269 Arbitration Burst: no limit 00:08:24.269 00:08:24.269 Power Management 00:08:24.269 ================ 00:08:24.269 Number of Power States: 1 00:08:24.269 Current Power State: Power State #0 00:08:24.269 Power State #0: 00:08:24.269 Max Power: 25.00 W 00:08:24.269 Non-Operational State: Operational 00:08:24.269 Entry Latency: 16 microseconds 00:08:24.269 Exit Latency: 4 microseconds 00:08:24.269 Relative Read Throughput: 0 00:08:24.269 Relative Read Latency: 0 00:08:24.269 Relative Write Throughput: 0 00:08:24.269 Relative Write Latency: 0 00:08:24.269 Idle Power: Not Reported 00:08:24.269 Active Power: Not Reported 00:08:24.269 Non-Operational Permissive Mode: Not Supported 00:08:24.269 00:08:24.269 Health Information 00:08:24.269 ================== 00:08:24.269 Critical Warnings: 00:08:24.269 Available Spare Space: OK 00:08:24.269 Temperature: OK 00:08:24.269 Device Reliability: OK 00:08:24.269 Read Only: No 00:08:24.269 Volatile Memory Backup: OK 00:08:24.269 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.269 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.269 Available Spare: 0% 00:08:24.269 Available Spare Threshold: 0% 00:08:24.269 Life Percentage Used: 0% 00:08:24.269 Data Units Read: 862 00:08:24.269 Data Units Written: 791 00:08:24.269 Host Read Commands: 38435 00:08:24.269 Host Write Commands: 37858 00:08:24.269 Controller Busy Time: 0 minutes 00:08:24.269 Power Cycles: 0 00:08:24.269 Power On Hours: 0 hours 00:08:24.269 Unsafe Shutdowns: 0 00:08:24.269 Unrecoverable Media Errors: 0 00:08:24.269 Lifetime Error Log Entries: 0 00:08:24.269 Warning Temperature Time: 0 minutes 00:08:24.269 Critical Temperature Time: 0 minutes 00:08:24.269 00:08:24.269 Number of Queues 00:08:24.269 ================ 00:08:24.269 Number of I/O Submission Queues: 64 00:08:24.269 Number of I/O Completion Queues: 64 00:08:24.269 00:08:24.269 ZNS Specific Controller Data 00:08:24.269 ============================ 00:08:24.269 Zone Append Size Limit: 0 00:08:24.269 00:08:24.269 00:08:24.269 Active Namespaces 00:08:24.269 ================= 00:08:24.269 Namespace ID:1 00:08:24.269 Error Recovery Timeout: Unlimited 00:08:24.269 Command Set Identifier: NVM (00h) 00:08:24.269 Deallocate: Supported 00:08:24.269 Deallocated/Unwritten Error: Supported 00:08:24.269 Deallocated Read Value: All 0x00 00:08:24.269 Deallocate in Write Zeroes: Not Supported 00:08:24.269 Deallocated Guard Field: 0xFFFF 00:08:24.269 Flush: Supported 00:08:24.269 Reservation: Not Supported 00:08:24.269 Namespace Sharing Capabilities: Multiple Controllers 00:08:24.269 Size (in LBAs): 262144 (1GiB) 00:08:24.269 Capacity (in LBAs): 262144 (1GiB) 00:08:24.269 Utilization (in LBAs): 262144 (1GiB) 00:08:24.269 Thin Provisioning: Not Supported 00:08:24.269 Per-NS Atomic Units: No 00:08:24.269 Maximum Single Source Range Length: 128 00:08:24.269 Maximum Copy Length: 128 00:08:24.269 Maximum Source Range Count: 128 00:08:24.269 NGUID/EUI64 Never Reused: No 00:08:24.269 Namespace Write Protected: No 00:08:24.269 Endurance group ID: 1 00:08:24.269 Number of LBA Formats: 8 00:08:24.269 Current LBA Format: LBA Format #04 00:08:24.269 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.269 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.269 LBA Format #02: Data Size: 512 Metadata 
Size: 16 00:08:24.269 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.269 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.269 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.269 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.269 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.269 00:08:24.269 Get Feature FDP: 00:08:24.269 ================ 00:08:24.269 Enabled: Yes 00:08:24.269 FDP configuration index: 0 00:08:24.269 00:08:24.269 FDP configurations log page 00:08:24.269 =========================== 00:08:24.269 Number of FDP configurations: 1 00:08:24.269 Version: 0 00:08:24.269 Size: 112 00:08:24.269 FDP Configuration Descriptor: 0 00:08:24.269 Descriptor Size: 96 00:08:24.269 Reclaim Group Identifier format: 2 00:08:24.269 FDP Volatile Write Cache: Not Present 00:08:24.269 FDP Configuration: Valid 00:08:24.269 Vendor Specific Size: 0 00:08:24.269 Number of Reclaim Groups: 2 00:08:24.269 Number of Reclaim Unit Handles: 8 00:08:24.269 Max Placement Identifiers: 128 00:08:24.269 Number of Namespaces Supported: 256 00:08:24.269 Reclaim unit Nominal Size: 6000000 bytes 00:08:24.269 Estimated Reclaim Unit Time Limit: Not Reported 00:08:24.269 RUH Desc #000: RUH Type: Initially Isolated 00:08:24.269 RUH Desc #001: RUH Type: Initially Isolated 00:08:24.269 RUH Desc #002: RUH Type: Initially Isolated 00:08:24.270 RUH Desc #003: RUH Type: Initially Isolated 00:08:24.270 RUH Desc #004: RUH Type: Initially Isolated 00:08:24.270 RUH Desc #005: RUH Type: Initially Isolated 00:08:24.270 RUH Desc #006: RUH Type: Initially Isolated 00:08:24.270 RUH Desc #007: RUH Type: Initially Isolated 00:08:24.270 00:08:24.270 FDP reclaim unit handle usage log page 00:08:24.270 ====================================== 00:08:24.270 Number of Reclaim Unit Handles: 8 00:08:24.270 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:24.270 RUH Usage Desc #001: RUH Attributes: Unused 00:08:24.270 RUH Usage Desc #002: RUH Attributes: Unused 00:08:24.270 RUH Usage Desc #003: RUH Attributes: Unused 00:08:24.270 RUH Usage Desc #004: RUH Attributes: Unused 00:08:24.270 [2024-11-03 10:02:52.562832] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 74737 terminated unexpected 00:08:24.270 RUH Usage Desc #005: RUH Attributes: Unused 00:08:24.270 RUH Usage Desc #006: RUH Attributes: Unused 00:08:24.270 RUH Usage Desc #007: RUH Attributes: Unused 00:08:24.270 00:08:24.270 FDP statistics log page 00:08:24.270 ======================= 00:08:24.270 Host bytes with metadata written: 504078336 00:08:24.270 Media bytes with metadata written: 504135680 00:08:24.270 Media bytes erased: 0 00:08:24.270 00:08:24.270 FDP events log page 00:08:24.270 =================== 00:08:24.270 Number of FDP events: 0 00:08:24.270 00:08:24.270 NVM Specific Namespace Data 00:08:24.270 =========================== 00:08:24.270 Logical Block Storage Tag Mask: 0 00:08:24.270 Protection Information Capabilities: 00:08:24.270 16b Guard Protection Information Storage Tag Support: No 00:08:24.270 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.270 Storage Tag Check Read Support: No 00:08:24.270 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #03: Storage Tag
Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.270 ===================================================== 00:08:24.270 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.270 ===================================================== 00:08:24.270 Controller Capabilities/Features 00:08:24.270 ================================ 00:08:24.270 Vendor ID: 1b36 00:08:24.270 Subsystem Vendor ID: 1af4 00:08:24.270 Serial Number: 12340 00:08:24.270 Model Number: QEMU NVMe Ctrl 00:08:24.270 Firmware Version: 8.0.0 00:08:24.270 Recommended Arb Burst: 6 00:08:24.270 IEEE OUI Identifier: 00 54 52 00:08:24.270 Multi-path I/O 00:08:24.270 May have multiple subsystem ports: No 00:08:24.270 May have multiple controllers: No 00:08:24.270 Associated with SR-IOV VF: No 00:08:24.270 Max Data Transfer Size: 524288 00:08:24.270 Max Number of Namespaces: 256 00:08:24.270 Max Number of I/O Queues: 64 00:08:24.270 NVMe Specification Version (VS): 1.4 00:08:24.270 NVMe Specification Version (Identify): 1.4 00:08:24.270 Maximum Queue Entries: 2048 00:08:24.270 Contiguous Queues Required: Yes 00:08:24.270 Arbitration Mechanisms Supported 00:08:24.270 Weighted Round Robin: Not Supported 00:08:24.270 Vendor Specific: Not Supported 00:08:24.270 Reset Timeout: 7500 ms 00:08:24.270 Doorbell Stride: 4 bytes 00:08:24.270 NVM Subsystem Reset: Not Supported 00:08:24.270 Command Sets Supported 00:08:24.270 NVM Command Set: Supported 00:08:24.270 Boot Partition: Not Supported 00:08:24.270 Memory Page Size Minimum: 4096 bytes 00:08:24.270 Memory Page Size Maximum: 65536 bytes 00:08:24.270 Persistent Memory Region: Not Supported 00:08:24.270 Optional Asynchronous Events Supported 00:08:24.270 Namespace Attribute Notices: Supported 00:08:24.270 Firmware Activation Notices: Not Supported 00:08:24.270 ANA Change Notices: Not Supported 00:08:24.270 PLE Aggregate Log Change Notices: Not Supported 00:08:24.270 LBA Status Info Alert Notices: Not Supported 00:08:24.270 EGE Aggregate Log Change Notices: Not Supported 00:08:24.270 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.270 Zone Descriptor Change Notices: Not Supported 00:08:24.270 Discovery Log Change Notices: Not Supported 00:08:24.270 Controller Attributes 00:08:24.270 128-bit Host Identifier: Not Supported 00:08:24.270 Non-Operational Permissive Mode: Not Supported 00:08:24.270 NVM Sets: Not Supported 00:08:24.270 Read Recovery Levels: Not Supported 00:08:24.270 Endurance Groups: Not Supported 00:08:24.270 Predictable Latency Mode: Not Supported 00:08:24.270 Traffic Based Keep ALive: Not Supported 00:08:24.270 Namespace Granularity: Not Supported 00:08:24.270 SQ Associations: Not Supported 00:08:24.270 UUID List: Not Supported 00:08:24.270 Multi-Domain Subsystem: Not Supported 00:08:24.270 Fixed Capacity Management: Not Supported 00:08:24.270 Variable Capacity Management: Not Supported 00:08:24.270 Delete Endurance Group: Not Supported 00:08:24.270 Delete NVM Set: Not Supported 00:08:24.270 Extended LBA Formats Supported: Supported 00:08:24.270 Flexible Data Placement Supported: Not Supported 00:08:24.270 00:08:24.270 Controller Memory Buffer Support 
00:08:24.270 ================================ 00:08:24.270 Supported: No 00:08:24.270 00:08:24.270 Persistent Memory Region Support 00:08:24.270 ================================ 00:08:24.270 Supported: No 00:08:24.270 00:08:24.270 Admin Command Set Attributes 00:08:24.270 ============================ 00:08:24.270 Security Send/Receive: Not Supported 00:08:24.270 Format NVM: Supported 00:08:24.270 Firmware Activate/Download: Not Supported 00:08:24.270 Namespace Management: Supported 00:08:24.270 Device Self-Test: Not Supported 00:08:24.270 Directives: Supported 00:08:24.270 NVMe-MI: Not Supported 00:08:24.270 Virtualization Management: Not Supported 00:08:24.270 Doorbell Buffer Config: Supported 00:08:24.270 Get LBA Status Capability: Not Supported 00:08:24.270 Command & Feature Lockdown Capability: Not Supported 00:08:24.270 Abort Command Limit: 4 00:08:24.270 Async Event Request Limit: 4 00:08:24.270 Number of Firmware Slots: N/A 00:08:24.270 Firmware Slot 1 Read-Only: N/A 00:08:24.270 Firmware Activation Without Reset: N/A 00:08:24.270 Multiple Update Detection Support: N/A 00:08:24.270 Firmware Update Granularity: No Information Provided 00:08:24.270 Per-Namespace SMART Log: Yes 00:08:24.270 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.270 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:24.270 Command Effects Log Page: Supported 00:08:24.270 Get Log Page Extended Data: Supported 00:08:24.270 Telemetry Log Pages: Not Supported 00:08:24.270 Persistent Event Log Pages: Not Supported 00:08:24.270 Supported Log Pages Log Page: May Support 00:08:24.270 Commands Supported & Effects Log Page: Not Supported 00:08:24.270 Feature Identifiers & Effects Log Page:May Support 00:08:24.270 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.271 Data Area 4 for Telemetry Log: Not Supported 00:08:24.271 Error Log Page Entries Supported: 1 00:08:24.271 Keep Alive: Not Supported 00:08:24.271 00:08:24.271 NVM Command Set Attributes 00:08:24.271 ========================== 00:08:24.271 Submission Queue Entry Size 00:08:24.271 Max: 64 00:08:24.271 Min: 64 00:08:24.271 Completion Queue Entry Size 00:08:24.271 Max: 16 00:08:24.271 Min: 16 00:08:24.271 Number of Namespaces: 256 00:08:24.271 Compare Command: Supported 00:08:24.271 Write Uncorrectable Command: Not Supported 00:08:24.271 Dataset Management Command: Supported 00:08:24.271 Write Zeroes Command: Supported 00:08:24.271 Set Features Save Field: Supported 00:08:24.271 Reservations: Not Supported 00:08:24.271 Timestamp: Supported 00:08:24.271 Copy: Supported 00:08:24.271 Volatile Write Cache: Present 00:08:24.271 Atomic Write Unit (Normal): 1 00:08:24.271 Atomic Write Unit (PFail): 1 00:08:24.271 Atomic Compare & Write Unit: 1 00:08:24.271 Fused Compare & Write: Not Supported 00:08:24.271 Scatter-Gather List 00:08:24.271 SGL Command Set: Supported 00:08:24.271 SGL Keyed: Not Supported 00:08:24.271 SGL Bit Bucket Descriptor: Not Supported 00:08:24.271 SGL Metadata Pointer: Not Supported 00:08:24.271 Oversized SGL: Not Supported 00:08:24.271 SGL Metadata Address: Not Supported 00:08:24.271 SGL Offset: Not Supported 00:08:24.271 Transport SGL Data Block: Not Supported 00:08:24.271 Replay Protected Memory Block: Not Supported 00:08:24.271 00:08:24.271 Firmware Slot Information 00:08:24.271 ========================= 00:08:24.271 Active slot: 1 00:08:24.271 Slot 1 Firmware Revision: 1.0 00:08:24.271 00:08:24.271 00:08:24.271 Commands Supported and Effects 00:08:24.271 ============================== 00:08:24.271 Admin Commands 00:08:24.271 
-------------- 00:08:24.271 Delete I/O Submission Queue (00h): Supported 00:08:24.271 Create I/O Submission Queue (01h): Supported 00:08:24.271 Get Log Page (02h): Supported 00:08:24.271 Delete I/O Completion Queue (04h): Supported 00:08:24.271 Create I/O Completion Queue (05h): Supported 00:08:24.271 Identify (06h): Supported 00:08:24.271 Abort (08h): Supported 00:08:24.271 Set Features (09h): Supported 00:08:24.271 Get Features (0Ah): Supported 00:08:24.271 Asynchronous Event Request (0Ch): Supported 00:08:24.271 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.271 Directive Send (19h): Supported 00:08:24.271 Directive Receive (1Ah): Supported 00:08:24.271 Virtualization Management (1Ch): Supported 00:08:24.271 Doorbell Buffer Config (7Ch): Supported 00:08:24.271 Format NVM (80h): Supported LBA-Change 00:08:24.271 I/O Commands 00:08:24.271 ------------ 00:08:24.271 Flush (00h): Supported LBA-Change 00:08:24.271 Write (01h): Supported LBA-Change 00:08:24.271 Read (02h): Supported 00:08:24.271 Compare (05h): Supported 00:08:24.271 Write Zeroes (08h): Supported LBA-Change 00:08:24.271 Dataset Management (09h): Supported LBA-Change 00:08:24.271 Unknown (0Ch): Supported 00:08:24.271 Unknown (12h): Supported 00:08:24.271 Copy (19h): Supported LBA-Change 00:08:24.271 Unknown (1Dh): Supported LBA-Change 00:08:24.271 00:08:24.271 Error Log 00:08:24.271 ========= 00:08:24.271 00:08:24.271 Arbitration 00:08:24.271 =========== 00:08:24.271 Arbitration Burst: no limit 00:08:24.271 00:08:24.271 Power Management 00:08:24.271 ================ 00:08:24.271 Number of Power States: 1 00:08:24.271 Current Power State: Power State #0 00:08:24.271 Power State #0: 00:08:24.271 Max Power: 25.00 W 00:08:24.271 Non-Operational State: Operational 00:08:24.271 Entry Latency: 16 microseconds 00:08:24.271 Exit Latency: 4 microseconds 00:08:24.271 Relative Read Throughput: 0 00:08:24.271 Relative Read Latency: 0 00:08:24.271 Relative Write Throughput: 0 00:08:24.271 Relative Write Latency: 0 00:08:24.271 Idle Power: Not Reported 00:08:24.271 Active Power: Not Reported 00:08:24.271 Non-Operational Permissive Mode: Not Supported 00:08:24.271 00:08:24.271 Health Information 00:08:24.271 ================== 00:08:24.271 Critical Warnings: 00:08:24.271 Available Spare Space: OK 00:08:24.271 Temperature: OK 00:08:24.271 Device Reliability: OK 00:08:24.271 Read Only: No 00:08:24.271 Volatile Memory Backup: OK 00:08:24.271 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.271 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.271 Available Spare: 0% 00:08:24.271 Available Spare Threshold: 0% 00:08:24.271 Life Percentage Used: 0% 00:08:24.271 Data Units Read: 699 00:08:24.271 Data Units Written: 627 00:08:24.271 Host Read Commands: 36679 00:08:24.271 Host Write Commands: 36465 00:08:24.271 Controller Busy Time: 0 minutes 00:08:24.271 Power Cycles: 0 00:08:24.271 Power On Hours: 0 hours 00:08:24.271 Unsafe Shutdowns: 0 00:08:24.271 Unrecoverable Media Errors: 0 00:08:24.271 Lifetime Error Log Entries: 0 00:08:24.271 Warning Temperature Time: 0 minutes 00:08:24.271 Critical Temperature Time: 0 minutes 00:08:24.271 00:08:24.271 Number of Queues 00:08:24.271 ================ 00:08:24.271 Number of I/O Submission Queues: 64 00:08:24.271 Number of I/O Completion Queues: 64 00:08:24.271 00:08:24.271 ZNS Specific Controller Data 00:08:24.271 ============================ 00:08:24.271 Zone Append Size Limit: 0 00:08:24.271 00:08:24.271 00:08:24.271 Active Namespaces 00:08:24.271 ================= 00:08:24.271 
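Two of the figures in the Health Information block above use spec-defined units that are easy to misread: the composite temperature is reported in Kelvin, and "Data Units Read/Written" count units of 1000 512-byte blocks (512,000 bytes each). A minimal C sketch of both conversions, with the values hard-coded from the 0000:00:10.0 dump above; the field semantics are from the NVMe 1.4 SMART/Health log page definition, not anything SPDK-specific:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Composite Temperature in the SMART/Health log is in Kelvin; the
     * "(50 Celsius)" annotation is consistent with integer subtraction. */
    uint16_t comp_temp_kelvin = 323;            /* from the dump above */
    printf("temperature: %d C\n", comp_temp_kelvin - 273);

    /* "Data Units Read" counts units of 1000 512-byte blocks, so one
     * unit is 512,000 bytes per the NVMe spec. */
    uint64_t data_units_read = 699;             /* from the dump above */
    printf("host reads: %llu bytes\n",
           (unsigned long long)(data_units_read * 512000ULL));
    return 0;
}

For the 699 units above this works out to roughly 358 MB read over the life of the (virtual) device.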
Namespace ID:1 00:08:24.271 Error Recovery Timeout: Unlimited 00:08:24.271 Command Set Identifier: NVM (00h) 00:08:24.271 Deallocate: Supported 00:08:24.271 Deallocated/Unwritten Error: Supported 00:08:24.271 Deallocated Read Value: All 0x00 00:08:24.271 Deallocate in Write Zeroes: Not Supported 00:08:24.271 Deallocated Guard Field: 0xFFFF 00:08:24.271 Flush: Supported 00:08:24.271 Reservation: Not Supported 00:08:24.271 Metadata Transferred as: Separate Metadata Buffer 00:08:24.271 Namespace Sharing Capabilities: Private 00:08:24.271 Size (in LBAs): 1548666 (5GiB) 00:08:24.271 Capacity (in LBAs): 1548666 (5GiB) 00:08:24.271 Utilization (in LBAs): 1548666 (5GiB) 00:08:24.271 Thin Provisioning: Not Supported 00:08:24.271 Per-NS Atomic Units: No 00:08:24.271 Maximum Single Source Range Length: 128 00:08:24.271 Maximum Copy Length: 128 00:08:24.271 Maximum Source Range Count: 128 00:08:24.271 NGUID/EUI64 Never Reused: No 00:08:24.271 Namespace Write Protected: No 00:08:24.271 Number of LBA Formats: 8 00:08:24.271 Current LBA Format: LBA Format #07 00:08:24.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.271 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.271 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.271 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.271 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-11-03 10:02:52.564095] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 74737 terminated unexpected 00:08:24.271 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.271 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.272 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.272 00:08:24.272 NVM Specific Namespace Data 00:08:24.272 =========================== 00:08:24.272 Logical Block Storage Tag Mask: 0 00:08:24.272 Protection Information Capabilities: 00:08:24.272 16b Guard Protection Information Storage Tag Support: No 00:08:24.272 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.272 Storage Tag Check Read Support: No 00:08:24.272 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.272 ===================================================== 00:08:24.272 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.272 ===================================================== 00:08:24.272 Controller Capabilities/Features 00:08:24.272 ================================ 00:08:24.272 Vendor ID: 1b36 00:08:24.272 Subsystem Vendor ID: 1af4 00:08:24.272 Serial Number: 12342 00:08:24.272 Model Number: QEMU NVMe Ctrl 00:08:24.272 Firmware Version: 8.0.0 00:08:24.272 Recommended Arb Burst: 6 00:08:24.272 IEEE OUI Identifier: 00 54 52 00:08:24.272 Multi-path I/O 00:08:24.272 May have multiple
subsystem ports: No 00:08:24.272 May have multiple controllers: No 00:08:24.272 Associated with SR-IOV VF: No 00:08:24.272 Max Data Transfer Size: 524288 00:08:24.272 Max Number of Namespaces: 256 00:08:24.272 Max Number of I/O Queues: 64 00:08:24.272 NVMe Specification Version (VS): 1.4 00:08:24.272 NVMe Specification Version (Identify): 1.4 00:08:24.272 Maximum Queue Entries: 2048 00:08:24.272 Contiguous Queues Required: Yes 00:08:24.272 Arbitration Mechanisms Supported 00:08:24.272 Weighted Round Robin: Not Supported 00:08:24.272 Vendor Specific: Not Supported 00:08:24.272 Reset Timeout: 7500 ms 00:08:24.272 Doorbell Stride: 4 bytes 00:08:24.272 NVM Subsystem Reset: Not Supported 00:08:24.272 Command Sets Supported 00:08:24.272 NVM Command Set: Supported 00:08:24.272 Boot Partition: Not Supported 00:08:24.272 Memory Page Size Minimum: 4096 bytes 00:08:24.272 Memory Page Size Maximum: 65536 bytes 00:08:24.272 Persistent Memory Region: Not Supported 00:08:24.272 Optional Asynchronous Events Supported 00:08:24.272 Namespace Attribute Notices: Supported 00:08:24.272 Firmware Activation Notices: Not Supported 00:08:24.272 ANA Change Notices: Not Supported 00:08:24.272 PLE Aggregate Log Change Notices: Not Supported 00:08:24.272 LBA Status Info Alert Notices: Not Supported 00:08:24.272 EGE Aggregate Log Change Notices: Not Supported 00:08:24.272 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.272 Zone Descriptor Change Notices: Not Supported 00:08:24.272 Discovery Log Change Notices: Not Supported 00:08:24.272 Controller Attributes 00:08:24.272 128-bit Host Identifier: Not Supported 00:08:24.272 Non-Operational Permissive Mode: Not Supported 00:08:24.272 NVM Sets: Not Supported 00:08:24.272 Read Recovery Levels: Not Supported 00:08:24.272 Endurance Groups: Not Supported 00:08:24.272 Predictable Latency Mode: Not Supported 00:08:24.272 Traffic Based Keep ALive: Not Supported 00:08:24.272 Namespace Granularity: Not Supported 00:08:24.272 SQ Associations: Not Supported 00:08:24.272 UUID List: Not Supported 00:08:24.272 Multi-Domain Subsystem: Not Supported 00:08:24.272 Fixed Capacity Management: Not Supported 00:08:24.272 Variable Capacity Management: Not Supported 00:08:24.272 Delete Endurance Group: Not Supported 00:08:24.272 Delete NVM Set: Not Supported 00:08:24.272 Extended LBA Formats Supported: Supported 00:08:24.272 Flexible Data Placement Supported: Not Supported 00:08:24.272 00:08:24.272 Controller Memory Buffer Support 00:08:24.272 ================================ 00:08:24.272 Supported: No 00:08:24.272 00:08:24.272 Persistent Memory Region Support 00:08:24.272 ================================ 00:08:24.272 Supported: No 00:08:24.272 00:08:24.272 Admin Command Set Attributes 00:08:24.272 ============================ 00:08:24.272 Security Send/Receive: Not Supported 00:08:24.272 Format NVM: Supported 00:08:24.272 Firmware Activate/Download: Not Supported 00:08:24.272 Namespace Management: Supported 00:08:24.272 Device Self-Test: Not Supported 00:08:24.272 Directives: Supported 00:08:24.272 NVMe-MI: Not Supported 00:08:24.272 Virtualization Management: Not Supported 00:08:24.272 Doorbell Buffer Config: Supported 00:08:24.272 Get LBA Status Capability: Not Supported 00:08:24.272 Command & Feature Lockdown Capability: Not Supported 00:08:24.272 Abort Command Limit: 4 00:08:24.272 Async Event Request Limit: 4 00:08:24.272 Number of Firmware Slots: N/A 00:08:24.272 Firmware Slot 1 Read-Only: N/A 00:08:24.272 Firmware Activation Without Reset: N/A 00:08:24.272 Multiple Update 
Detection Support: N/A 00:08:24.272 Firmware Update Granularity: No Information Provided 00:08:24.272 Per-Namespace SMART Log: Yes 00:08:24.272 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.272 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:24.272 Command Effects Log Page: Supported 00:08:24.272 Get Log Page Extended Data: Supported 00:08:24.272 Telemetry Log Pages: Not Supported 00:08:24.272 Persistent Event Log Pages: Not Supported 00:08:24.272 Supported Log Pages Log Page: May Support 00:08:24.272 Commands Supported & Effects Log Page: Not Supported 00:08:24.272 Feature Identifiers & Effects Log Page:May Support 00:08:24.272 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.273 Data Area 4 for Telemetry Log: Not Supported 00:08:24.273 Error Log Page Entries Supported: 1 00:08:24.273 Keep Alive: Not Supported 00:08:24.273 00:08:24.273 NVM Command Set Attributes 00:08:24.273 ========================== 00:08:24.273 Submission Queue Entry Size 00:08:24.273 Max: 64 00:08:24.273 Min: 64 00:08:24.273 Completion Queue Entry Size 00:08:24.273 Max: 16 00:08:24.273 Min: 16 00:08:24.273 Number of Namespaces: 256 00:08:24.273 Compare Command: Supported 00:08:24.273 Write Uncorrectable Command: Not Supported 00:08:24.273 Dataset Management Command: Supported 00:08:24.273 Write Zeroes Command: Supported 00:08:24.273 Set Features Save Field: Supported 00:08:24.273 Reservations: Not Supported 00:08:24.273 Timestamp: Supported 00:08:24.273 Copy: Supported 00:08:24.273 Volatile Write Cache: Present 00:08:24.273 Atomic Write Unit (Normal): 1 00:08:24.273 Atomic Write Unit (PFail): 1 00:08:24.273 Atomic Compare & Write Unit: 1 00:08:24.273 Fused Compare & Write: Not Supported 00:08:24.273 Scatter-Gather List 00:08:24.273 SGL Command Set: Supported 00:08:24.273 SGL Keyed: Not Supported 00:08:24.273 SGL Bit Bucket Descriptor: Not Supported 00:08:24.273 SGL Metadata Pointer: Not Supported 00:08:24.273 Oversized SGL: Not Supported 00:08:24.273 SGL Metadata Address: Not Supported 00:08:24.273 SGL Offset: Not Supported 00:08:24.273 Transport SGL Data Block: Not Supported 00:08:24.273 Replay Protected Memory Block: Not Supported 00:08:24.273 00:08:24.273 Firmware Slot Information 00:08:24.273 ========================= 00:08:24.273 Active slot: 1 00:08:24.273 Slot 1 Firmware Revision: 1.0 00:08:24.273 00:08:24.273 00:08:24.273 Commands Supported and Effects 00:08:24.273 ============================== 00:08:24.273 Admin Commands 00:08:24.273 -------------- 00:08:24.273 Delete I/O Submission Queue (00h): Supported 00:08:24.273 Create I/O Submission Queue (01h): Supported 00:08:24.273 Get Log Page (02h): Supported 00:08:24.273 Delete I/O Completion Queue (04h): Supported 00:08:24.273 Create I/O Completion Queue (05h): Supported 00:08:24.273 Identify (06h): Supported 00:08:24.273 Abort (08h): Supported 00:08:24.273 Set Features (09h): Supported 00:08:24.273 Get Features (0Ah): Supported 00:08:24.273 Asynchronous Event Request (0Ch): Supported 00:08:24.273 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.273 Directive Send (19h): Supported 00:08:24.273 Directive Receive (1Ah): Supported 00:08:24.273 Virtualization Management (1Ch): Supported 00:08:24.273 Doorbell Buffer Config (7Ch): Supported 00:08:24.273 Format NVM (80h): Supported LBA-Change 00:08:24.273 I/O Commands 00:08:24.273 ------------ 00:08:24.273 Flush (00h): Supported LBA-Change 00:08:24.273 Write (01h): Supported LBA-Change 00:08:24.273 Read (02h): Supported 00:08:24.273 Compare (05h): Supported 00:08:24.273 Write 
Zeroes (08h): Supported LBA-Change 00:08:24.273 Dataset Management (09h): Supported LBA-Change 00:08:24.273 Unknown (0Ch): Supported 00:08:24.273 Unknown (12h): Supported 00:08:24.273 Copy (19h): Supported LBA-Change 00:08:24.273 Unknown (1Dh): Supported LBA-Change 00:08:24.273 00:08:24.273 Error Log 00:08:24.273 ========= 00:08:24.273 00:08:24.273 Arbitration 00:08:24.273 =========== 00:08:24.273 Arbitration Burst: no limit 00:08:24.273 00:08:24.273 Power Management 00:08:24.273 ================ 00:08:24.273 Number of Power States: 1 00:08:24.273 Current Power State: Power State #0 00:08:24.273 Power State #0: 00:08:24.273 Max Power: 25.00 W 00:08:24.273 Non-Operational State: Operational 00:08:24.273 Entry Latency: 16 microseconds 00:08:24.273 Exit Latency: 4 microseconds 00:08:24.273 Relative Read Throughput: 0 00:08:24.273 Relative Read Latency: 0 00:08:24.273 Relative Write Throughput: 0 00:08:24.273 Relative Write Latency: 0 00:08:24.273 Idle Power: Not Reported 00:08:24.273 Active Power: Not Reported 00:08:24.273 Non-Operational Permissive Mode: Not Supported 00:08:24.273 00:08:24.273 Health Information 00:08:24.273 ================== 00:08:24.273 Critical Warnings: 00:08:24.273 Available Spare Space: OK 00:08:24.273 Temperature: OK 00:08:24.273 Device Reliability: OK 00:08:24.273 Read Only: No 00:08:24.273 Volatile Memory Backup: OK 00:08:24.273 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.273 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.273 Available Spare: 0% 00:08:24.273 Available Spare Threshold: 0% 00:08:24.273 Life Percentage Used: 0% 00:08:24.273 Data Units Read: 2243 00:08:24.273 Data Units Written: 2030 00:08:24.273 Host Read Commands: 112203 00:08:24.273 Host Write Commands: 110473 00:08:24.273 Controller Busy Time: 0 minutes 00:08:24.273 Power Cycles: 0 00:08:24.273 Power On Hours: 0 hours 00:08:24.273 Unsafe Shutdowns: 0 00:08:24.273 Unrecoverable Media Errors: 0 00:08:24.273 Lifetime Error Log Entries: 0 00:08:24.273 Warning Temperature Time: 0 minutes 00:08:24.273 Critical Temperature Time: 0 minutes 00:08:24.273 00:08:24.273 Number of Queues 00:08:24.273 ================ 00:08:24.273 Number of I/O Submission Queues: 64 00:08:24.273 Number of I/O Completion Queues: 64 00:08:24.273 00:08:24.273 ZNS Specific Controller Data 00:08:24.273 ============================ 00:08:24.273 Zone Append Size Limit: 0 00:08:24.273 00:08:24.273 00:08:24.273 Active Namespaces 00:08:24.273 ================= 00:08:24.273 Namespace ID:1 00:08:24.273 Error Recovery Timeout: Unlimited 00:08:24.273 Command Set Identifier: NVM (00h) 00:08:24.273 Deallocate: Supported 00:08:24.273 Deallocated/Unwritten Error: Supported 00:08:24.273 Deallocated Read Value: All 0x00 00:08:24.273 Deallocate in Write Zeroes: Not Supported 00:08:24.273 Deallocated Guard Field: 0xFFFF 00:08:24.273 Flush: Supported 00:08:24.273 Reservation: Not Supported 00:08:24.273 Namespace Sharing Capabilities: Private 00:08:24.273 Size (in LBAs): 1048576 (4GiB) 00:08:24.273 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.273 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.273 Thin Provisioning: Not Supported 00:08:24.273 Per-NS Atomic Units: No 00:08:24.273 Maximum Single Source Range Length: 128 00:08:24.273 Maximum Copy Length: 128 00:08:24.273 Maximum Source Range Count: 128 00:08:24.273 NGUID/EUI64 Never Reused: No 00:08:24.273 Namespace Write Protected: No 00:08:24.273 Number of LBA Formats: 8 00:08:24.274 Current LBA Format: LBA Format #04 00:08:24.274 LBA Format #00: Data Size: 512 Metadata Size: 0 
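The "(4GiB)" annotations in the namespace listings above are derived rather than reported: the identify data gives the namespace size in logical blocks (NSZE), which has to be multiplied by the data size of the current LBA format. A small sketch of that arithmetic using the 12342 namespace numbers above (NSZE = 1048576, current format #04 = 4096-byte blocks); variable names are mine:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Namespace 1 on the 12342 controller above. */
    uint64_t nsze = 1048576;        /* Size (in LBAs)                  */
    uint32_t block_size = 4096;     /* data size of LBA Format #04     */

    uint64_t bytes = nsze * block_size;
    printf("%llu LBAs x %u B = %llu bytes (%.0f GiB)\n",
           (unsigned long long)nsze, block_size,
           (unsigned long long)bytes,
           bytes / (1024.0 * 1024.0 * 1024.0));   /* exactly 4 GiB */
    return 0;
}

The same arithmetic gives 5 GiB for the 12341 namespace later in the log (1310720 LBAs at 4096 bytes).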
00:08:24.274 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.274 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.274 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.274 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.274 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.274 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.274 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.274 00:08:24.274 NVM Specific Namespace Data 00:08:24.274 =========================== 00:08:24.274 Logical Block Storage Tag Mask: 0 00:08:24.274 Protection Information Capabilities: 00:08:24.274 16b Guard Protection Information Storage Tag Support: No 00:08:24.274 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.274 Storage Tag Check Read Support: No 00:08:24.274 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Namespace ID:2 00:08:24.274 Error Recovery Timeout: Unlimited 00:08:24.274 Command Set Identifier: NVM (00h) 00:08:24.274 Deallocate: Supported 00:08:24.274 Deallocated/Unwritten Error: Supported 00:08:24.274 Deallocated Read Value: All 0x00 00:08:24.274 Deallocate in Write Zeroes: Not Supported 00:08:24.274 Deallocated Guard Field: 0xFFFF 00:08:24.274 Flush: Supported 00:08:24.274 Reservation: Not Supported 00:08:24.274 Namespace Sharing Capabilities: Private 00:08:24.274 Size (in LBAs): 1048576 (4GiB) 00:08:24.274 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.274 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.274 Thin Provisioning: Not Supported 00:08:24.274 Per-NS Atomic Units: No 00:08:24.274 Maximum Single Source Range Length: 128 00:08:24.274 Maximum Copy Length: 128 00:08:24.274 Maximum Source Range Count: 128 00:08:24.274 NGUID/EUI64 Never Reused: No 00:08:24.274 Namespace Write Protected: No 00:08:24.274 Number of LBA Formats: 8 00:08:24.274 Current LBA Format: LBA Format #04 00:08:24.274 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.274 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.274 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.274 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.274 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.274 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.274 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.274 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.274 00:08:24.274 NVM Specific Namespace Data 00:08:24.274 =========================== 00:08:24.274 Logical Block Storage Tag Mask: 0 00:08:24.274 Protection Information Capabilities: 00:08:24.274 16b Guard Protection Information Storage Tag Support: No 00:08:24.274 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.274 Storage Tag 
Check Read Support: No 00:08:24.274 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Namespace ID:3 00:08:24.274 Error Recovery Timeout: Unlimited 00:08:24.274 Command Set Identifier: NVM (00h) 00:08:24.274 Deallocate: Supported 00:08:24.274 Deallocated/Unwritten Error: Supported 00:08:24.274 Deallocated Read Value: All 0x00 00:08:24.274 Deallocate in Write Zeroes: Not Supported 00:08:24.274 Deallocated Guard Field: 0xFFFF 00:08:24.274 Flush: Supported 00:08:24.274 Reservation: Not Supported 00:08:24.274 Namespace Sharing Capabilities: Private 00:08:24.274 Size (in LBAs): 1048576 (4GiB) 00:08:24.274 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.274 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.274 Thin Provisioning: Not Supported 00:08:24.274 Per-NS Atomic Units: No 00:08:24.274 Maximum Single Source Range Length: 128 00:08:24.274 Maximum Copy Length: 128 00:08:24.274 Maximum Source Range Count: 128 00:08:24.274 NGUID/EUI64 Never Reused: No 00:08:24.274 Namespace Write Protected: No 00:08:24.274 Number of LBA Formats: 8 00:08:24.274 Current LBA Format: LBA Format #04 00:08:24.274 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.274 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.274 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.274 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.274 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.274 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.274 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.274 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.274 00:08:24.274 NVM Specific Namespace Data 00:08:24.274 =========================== 00:08:24.274 Logical Block Storage Tag Mask: 0 00:08:24.274 Protection Information Capabilities: 00:08:24.274 16b Guard Protection Information Storage Tag Support: No 00:08:24.274 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.274 Storage Tag Check Read Support: No 00:08:24.274 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.274 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:24.274 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:24.274 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:24.537 ===================================================== 00:08:24.537 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.537 ===================================================== 00:08:24.537 Controller Capabilities/Features 00:08:24.537 ================================ 00:08:24.537 Vendor ID: 1b36 00:08:24.537 Subsystem Vendor ID: 1af4 00:08:24.537 Serial Number: 12340 00:08:24.537 Model Number: QEMU NVMe Ctrl 00:08:24.537 Firmware Version: 8.0.0 00:08:24.537 Recommended Arb Burst: 6 00:08:24.537 IEEE OUI Identifier: 00 54 52 00:08:24.537 Multi-path I/O 00:08:24.538 May have multiple subsystem ports: No 00:08:24.538 May have multiple controllers: No 00:08:24.538 Associated with SR-IOV VF: No 00:08:24.538 Max Data Transfer Size: 524288 00:08:24.538 Max Number of Namespaces: 256 00:08:24.538 Max Number of I/O Queues: 64 00:08:24.538 NVMe Specification Version (VS): 1.4 00:08:24.538 NVMe Specification Version (Identify): 1.4 00:08:24.538 Maximum Queue Entries: 2048 00:08:24.538 Contiguous Queues Required: Yes 00:08:24.538 Arbitration Mechanisms Supported 00:08:24.538 Weighted Round Robin: Not Supported 00:08:24.538 Vendor Specific: Not Supported 00:08:24.538 Reset Timeout: 7500 ms 00:08:24.538 Doorbell Stride: 4 bytes 00:08:24.538 NVM Subsystem Reset: Not Supported 00:08:24.538 Command Sets Supported 00:08:24.538 NVM Command Set: Supported 00:08:24.538 Boot Partition: Not Supported 00:08:24.538 Memory Page Size Minimum: 4096 bytes 00:08:24.538 Memory Page Size Maximum: 65536 bytes 00:08:24.538 Persistent Memory Region: Not Supported 00:08:24.538 Optional Asynchronous Events Supported 00:08:24.538 Namespace Attribute Notices: Supported 00:08:24.538 Firmware Activation Notices: Not Supported 00:08:24.538 ANA Change Notices: Not Supported 00:08:24.538 PLE Aggregate Log Change Notices: Not Supported 00:08:24.538 LBA Status Info Alert Notices: Not Supported 00:08:24.538 EGE Aggregate Log Change Notices: Not Supported 00:08:24.538 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.538 Zone Descriptor Change Notices: Not Supported 00:08:24.538 Discovery Log Change Notices: Not Supported 00:08:24.538 Controller Attributes 00:08:24.538 128-bit Host Identifier: Not Supported 00:08:24.538 Non-Operational Permissive Mode: Not Supported 00:08:24.538 NVM Sets: Not Supported 00:08:24.538 Read Recovery Levels: Not Supported 00:08:24.538 Endurance Groups: Not Supported 00:08:24.538 Predictable Latency Mode: Not Supported 00:08:24.538 Traffic Based Keep ALive: Not Supported 00:08:24.538 Namespace Granularity: Not Supported 00:08:24.538 SQ Associations: Not Supported 00:08:24.538 UUID List: Not Supported 00:08:24.538 Multi-Domain Subsystem: Not Supported 00:08:24.538 Fixed Capacity Management: Not Supported 00:08:24.538 Variable Capacity Management: Not Supported 00:08:24.538 Delete Endurance Group: Not Supported 00:08:24.538 Delete NVM Set: Not Supported 00:08:24.538 Extended LBA Formats Supported: Supported 00:08:24.538 Flexible Data Placement Supported: Not Supported 00:08:24.538 00:08:24.538 Controller Memory Buffer Support 00:08:24.538 ================================ 00:08:24.538 Supported: No 00:08:24.538 00:08:24.538 Persistent Memory Region Support 00:08:24.538 
================================ 00:08:24.538 Supported: No 00:08:24.538 00:08:24.538 Admin Command Set Attributes 00:08:24.538 ============================ 00:08:24.538 Security Send/Receive: Not Supported 00:08:24.538 Format NVM: Supported 00:08:24.538 Firmware Activate/Download: Not Supported 00:08:24.538 Namespace Management: Supported 00:08:24.538 Device Self-Test: Not Supported 00:08:24.538 Directives: Supported 00:08:24.538 NVMe-MI: Not Supported 00:08:24.538 Virtualization Management: Not Supported 00:08:24.538 Doorbell Buffer Config: Supported 00:08:24.538 Get LBA Status Capability: Not Supported 00:08:24.538 Command & Feature Lockdown Capability: Not Supported 00:08:24.538 Abort Command Limit: 4 00:08:24.538 Async Event Request Limit: 4 00:08:24.538 Number of Firmware Slots: N/A 00:08:24.538 Firmware Slot 1 Read-Only: N/A 00:08:24.538 Firmware Activation Without Reset: N/A 00:08:24.538 Multiple Update Detection Support: N/A 00:08:24.538 Firmware Update Granularity: No Information Provided 00:08:24.538 Per-Namespace SMART Log: Yes 00:08:24.538 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.538 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:24.538 Command Effects Log Page: Supported 00:08:24.538 Get Log Page Extended Data: Supported 00:08:24.538 Telemetry Log Pages: Not Supported 00:08:24.538 Persistent Event Log Pages: Not Supported 00:08:24.538 Supported Log Pages Log Page: May Support 00:08:24.538 Commands Supported & Effects Log Page: Not Supported 00:08:24.538 Feature Identifiers & Effects Log Page:May Support 00:08:24.538 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.538 Data Area 4 for Telemetry Log: Not Supported 00:08:24.538 Error Log Page Entries Supported: 1 00:08:24.538 Keep Alive: Not Supported 00:08:24.538 00:08:24.538 NVM Command Set Attributes 00:08:24.538 ========================== 00:08:24.538 Submission Queue Entry Size 00:08:24.538 Max: 64 00:08:24.538 Min: 64 00:08:24.538 Completion Queue Entry Size 00:08:24.538 Max: 16 00:08:24.538 Min: 16 00:08:24.538 Number of Namespaces: 256 00:08:24.538 Compare Command: Supported 00:08:24.538 Write Uncorrectable Command: Not Supported 00:08:24.538 Dataset Management Command: Supported 00:08:24.538 Write Zeroes Command: Supported 00:08:24.538 Set Features Save Field: Supported 00:08:24.538 Reservations: Not Supported 00:08:24.538 Timestamp: Supported 00:08:24.538 Copy: Supported 00:08:24.538 Volatile Write Cache: Present 00:08:24.538 Atomic Write Unit (Normal): 1 00:08:24.538 Atomic Write Unit (PFail): 1 00:08:24.538 Atomic Compare & Write Unit: 1 00:08:24.538 Fused Compare & Write: Not Supported 00:08:24.538 Scatter-Gather List 00:08:24.538 SGL Command Set: Supported 00:08:24.538 SGL Keyed: Not Supported 00:08:24.538 SGL Bit Bucket Descriptor: Not Supported 00:08:24.538 SGL Metadata Pointer: Not Supported 00:08:24.538 Oversized SGL: Not Supported 00:08:24.538 SGL Metadata Address: Not Supported 00:08:24.538 SGL Offset: Not Supported 00:08:24.538 Transport SGL Data Block: Not Supported 00:08:24.538 Replay Protected Memory Block: Not Supported 00:08:24.538 00:08:24.538 Firmware Slot Information 00:08:24.538 ========================= 00:08:24.538 Active slot: 1 00:08:24.538 Slot 1 Firmware Revision: 1.0 00:08:24.538 00:08:24.538 00:08:24.538 Commands Supported and Effects 00:08:24.538 ============================== 00:08:24.538 Admin Commands 00:08:24.538 -------------- 00:08:24.538 Delete I/O Submission Queue (00h): Supported 00:08:24.538 Create I/O Submission Queue (01h): Supported 00:08:24.538 
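"Doorbell Stride: 4 bytes" in the dumps above corresponds to CAP.DSTRD = 0, and it fixes where each queue's doorbell register lives in the controller's BAR0. A sketch of the offset computation; the formula is the one defined in the NVMe base specification, while the function name and loop are mine:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* SQ y tail doorbell: 0x1000 + (2y)     * (4 << CAP.DSTRD)
 * CQ y head doorbell: 0x1000 + (2y + 1) * (4 << CAP.DSTRD) */
static uint64_t doorbell_offset(uint16_t qid, bool is_cq, uint8_t dstrd)
{
    return 0x1000 + (2ULL * qid + (is_cq ? 1 : 0)) * (4ULL << dstrd);
}

int main(void)
{
    uint8_t dstrd = 0;  /* "Doorbell Stride: 4 bytes" => DSTRD = 0 */
    for (uint16_t qid = 0; qid <= 2; qid++) {
        printf("SQ%u tail: 0x%04llx  CQ%u head: 0x%04llx\n",
               qid, (unsigned long long)doorbell_offset(qid, false, dstrd),
               qid, (unsigned long long)doorbell_offset(qid, true, dstrd));
    }
    return 0;
}

With a 4-byte stride the doorbells are packed back to back: SQ0 tail at 0x1000, CQ0 head at 0x1004, SQ1 tail at 0x1008, and so on.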
Get Log Page (02h): Supported 00:08:24.538 Delete I/O Completion Queue (04h): Supported 00:08:24.538 Create I/O Completion Queue (05h): Supported 00:08:24.538 Identify (06h): Supported 00:08:24.538 Abort (08h): Supported 00:08:24.538 Set Features (09h): Supported 00:08:24.538 Get Features (0Ah): Supported 00:08:24.538 Asynchronous Event Request (0Ch): Supported 00:08:24.538 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.538 Directive Send (19h): Supported 00:08:24.538 Directive Receive (1Ah): Supported 00:08:24.538 Virtualization Management (1Ch): Supported 00:08:24.538 Doorbell Buffer Config (7Ch): Supported 00:08:24.538 Format NVM (80h): Supported LBA-Change 00:08:24.538 I/O Commands 00:08:24.538 ------------ 00:08:24.538 Flush (00h): Supported LBA-Change 00:08:24.538 Write (01h): Supported LBA-Change 00:08:24.538 Read (02h): Supported 00:08:24.538 Compare (05h): Supported 00:08:24.539 Write Zeroes (08h): Supported LBA-Change 00:08:24.539 Dataset Management (09h): Supported LBA-Change 00:08:24.539 Unknown (0Ch): Supported 00:08:24.539 Unknown (12h): Supported 00:08:24.539 Copy (19h): Supported LBA-Change 00:08:24.539 Unknown (1Dh): Supported LBA-Change 00:08:24.539 00:08:24.539 Error Log 00:08:24.539 ========= 00:08:24.539 00:08:24.539 Arbitration 00:08:24.539 =========== 00:08:24.539 Arbitration Burst: no limit 00:08:24.539 00:08:24.539 Power Management 00:08:24.539 ================ 00:08:24.539 Number of Power States: 1 00:08:24.539 Current Power State: Power State #0 00:08:24.539 Power State #0: 00:08:24.539 Max Power: 25.00 W 00:08:24.539 Non-Operational State: Operational 00:08:24.539 Entry Latency: 16 microseconds 00:08:24.539 Exit Latency: 4 microseconds 00:08:24.539 Relative Read Throughput: 0 00:08:24.539 Relative Read Latency: 0 00:08:24.539 Relative Write Throughput: 0 00:08:24.539 Relative Write Latency: 0 00:08:24.539 Idle Power: Not Reported 00:08:24.539 Active Power: Not Reported 00:08:24.539 Non-Operational Permissive Mode: Not Supported 00:08:24.539 00:08:24.539 Health Information 00:08:24.539 ================== 00:08:24.539 Critical Warnings: 00:08:24.539 Available Spare Space: OK 00:08:24.539 Temperature: OK 00:08:24.539 Device Reliability: OK 00:08:24.539 Read Only: No 00:08:24.539 Volatile Memory Backup: OK 00:08:24.539 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.539 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.539 Available Spare: 0% 00:08:24.539 Available Spare Threshold: 0% 00:08:24.539 Life Percentage Used: 0% 00:08:24.539 Data Units Read: 699 00:08:24.539 Data Units Written: 627 00:08:24.539 Host Read Commands: 36679 00:08:24.539 Host Write Commands: 36465 00:08:24.539 Controller Busy Time: 0 minutes 00:08:24.539 Power Cycles: 0 00:08:24.539 Power On Hours: 0 hours 00:08:24.539 Unsafe Shutdowns: 0 00:08:24.539 Unrecoverable Media Errors: 0 00:08:24.539 Lifetime Error Log Entries: 0 00:08:24.539 Warning Temperature Time: 0 minutes 00:08:24.539 Critical Temperature Time: 0 minutes 00:08:24.539 00:08:24.539 Number of Queues 00:08:24.539 ================ 00:08:24.539 Number of I/O Submission Queues: 64 00:08:24.539 Number of I/O Completion Queues: 64 00:08:24.539 00:08:24.539 ZNS Specific Controller Data 00:08:24.539 ============================ 00:08:24.539 Zone Append Size Limit: 0 00:08:24.539 00:08:24.539 00:08:24.539 Active Namespaces 00:08:24.539 ================= 00:08:24.539 Namespace ID:1 00:08:24.539 Error Recovery Timeout: Unlimited 00:08:24.539 Command Set Identifier: NVM (00h) 00:08:24.539 Deallocate: Supported 
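The Extended LBA Format tables repeated throughout these dumps matter for buffer sizing: when a namespace is formatted with extended LBAs (metadata transferred inline rather than, as for the 12340 namespace above, in a separate metadata buffer), each host block carries data and metadata back to back. A trivial sketch of the resulting per-block size for the 12340 controller's current format #07 (4096-byte data, 64-byte metadata), per the NVMe spec's extended-LBA definition:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* LBA Format #07 above: 4096-byte data + 64-byte metadata.  With
     * extended LBAs the host buffer must cover both per block. */
    uint32_t data_size = 4096, md_size = 64;
    printf("bytes per extended LBA: %u\n", data_size + md_size); /* 4160 */
    return 0;
}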
00:08:24.539 Deallocated/Unwritten Error: Supported 00:08:24.539 Deallocated Read Value: All 0x00 00:08:24.539 Deallocate in Write Zeroes: Not Supported 00:08:24.539 Deallocated Guard Field: 0xFFFF 00:08:24.539 Flush: Supported 00:08:24.539 Reservation: Not Supported 00:08:24.539 Metadata Transferred as: Separate Metadata Buffer 00:08:24.539 Namespace Sharing Capabilities: Private 00:08:24.539 Size (in LBAs): 1548666 (5GiB) 00:08:24.539 Capacity (in LBAs): 1548666 (5GiB) 00:08:24.539 Utilization (in LBAs): 1548666 (5GiB) 00:08:24.539 Thin Provisioning: Not Supported 00:08:24.539 Per-NS Atomic Units: No 00:08:24.539 Maximum Single Source Range Length: 128 00:08:24.539 Maximum Copy Length: 128 00:08:24.539 Maximum Source Range Count: 128 00:08:24.539 NGUID/EUI64 Never Reused: No 00:08:24.539 Namespace Write Protected: No 00:08:24.539 Number of LBA Formats: 8 00:08:24.539 Current LBA Format: LBA Format #07 00:08:24.539 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.539 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.539 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.539 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.539 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.539 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.539 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.539 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.539 00:08:24.539 NVM Specific Namespace Data 00:08:24.539 =========================== 00:08:24.539 Logical Block Storage Tag Mask: 0 00:08:24.539 Protection Information Capabilities: 00:08:24.539 16b Guard Protection Information Storage Tag Support: No 00:08:24.539 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.539 Storage Tag Check Read Support: No 00:08:24.539 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.539 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:24.539 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:24.803 ===================================================== 00:08:24.803 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.803 ===================================================== 00:08:24.803 Controller Capabilities/Features 00:08:24.803 ================================ 00:08:24.803 Vendor ID: 1b36 00:08:24.803 Subsystem Vendor ID: 1af4 00:08:24.803 Serial Number: 12341 00:08:24.803 Model Number: QEMU NVMe Ctrl 00:08:24.803 Firmware Version: 8.0.0 00:08:24.803 Recommended Arb Burst: 6 00:08:24.803 IEEE OUI Identifier: 00 54 52 00:08:24.803 Multi-path I/O 00:08:24.803 May have multiple subsystem ports: No 00:08:24.803 May have multiple 
controllers: No 00:08:24.803 Associated with SR-IOV VF: No 00:08:24.803 Max Data Transfer Size: 524288 00:08:24.803 Max Number of Namespaces: 256 00:08:24.803 Max Number of I/O Queues: 64 00:08:24.803 NVMe Specification Version (VS): 1.4 00:08:24.803 NVMe Specification Version (Identify): 1.4 00:08:24.803 Maximum Queue Entries: 2048 00:08:24.803 Contiguous Queues Required: Yes 00:08:24.803 Arbitration Mechanisms Supported 00:08:24.803 Weighted Round Robin: Not Supported 00:08:24.803 Vendor Specific: Not Supported 00:08:24.803 Reset Timeout: 7500 ms 00:08:24.803 Doorbell Stride: 4 bytes 00:08:24.803 NVM Subsystem Reset: Not Supported 00:08:24.803 Command Sets Supported 00:08:24.803 NVM Command Set: Supported 00:08:24.803 Boot Partition: Not Supported 00:08:24.803 Memory Page Size Minimum: 4096 bytes 00:08:24.803 Memory Page Size Maximum: 65536 bytes 00:08:24.803 Persistent Memory Region: Not Supported 00:08:24.803 Optional Asynchronous Events Supported 00:08:24.803 Namespace Attribute Notices: Supported 00:08:24.803 Firmware Activation Notices: Not Supported 00:08:24.803 ANA Change Notices: Not Supported 00:08:24.803 PLE Aggregate Log Change Notices: Not Supported 00:08:24.803 LBA Status Info Alert Notices: Not Supported 00:08:24.803 EGE Aggregate Log Change Notices: Not Supported 00:08:24.803 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.803 Zone Descriptor Change Notices: Not Supported 00:08:24.803 Discovery Log Change Notices: Not Supported 00:08:24.803 Controller Attributes 00:08:24.803 128-bit Host Identifier: Not Supported 00:08:24.803 Non-Operational Permissive Mode: Not Supported 00:08:24.804 NVM Sets: Not Supported 00:08:24.804 Read Recovery Levels: Not Supported 00:08:24.804 Endurance Groups: Not Supported 00:08:24.804 Predictable Latency Mode: Not Supported 00:08:24.804 Traffic Based Keep ALive: Not Supported 00:08:24.804 Namespace Granularity: Not Supported 00:08:24.804 SQ Associations: Not Supported 00:08:24.804 UUID List: Not Supported 00:08:24.804 Multi-Domain Subsystem: Not Supported 00:08:24.804 Fixed Capacity Management: Not Supported 00:08:24.804 Variable Capacity Management: Not Supported 00:08:24.804 Delete Endurance Group: Not Supported 00:08:24.804 Delete NVM Set: Not Supported 00:08:24.804 Extended LBA Formats Supported: Supported 00:08:24.804 Flexible Data Placement Supported: Not Supported 00:08:24.804 00:08:24.804 Controller Memory Buffer Support 00:08:24.804 ================================ 00:08:24.804 Supported: No 00:08:24.804 00:08:24.804 Persistent Memory Region Support 00:08:24.804 ================================ 00:08:24.804 Supported: No 00:08:24.804 00:08:24.804 Admin Command Set Attributes 00:08:24.804 ============================ 00:08:24.804 Security Send/Receive: Not Supported 00:08:24.804 Format NVM: Supported 00:08:24.804 Firmware Activate/Download: Not Supported 00:08:24.804 Namespace Management: Supported 00:08:24.804 Device Self-Test: Not Supported 00:08:24.804 Directives: Supported 00:08:24.804 NVMe-MI: Not Supported 00:08:24.804 Virtualization Management: Not Supported 00:08:24.804 Doorbell Buffer Config: Supported 00:08:24.804 Get LBA Status Capability: Not Supported 00:08:24.804 Command & Feature Lockdown Capability: Not Supported 00:08:24.804 Abort Command Limit: 4 00:08:24.804 Async Event Request Limit: 4 00:08:24.804 Number of Firmware Slots: N/A 00:08:24.804 Firmware Slot 1 Read-Only: N/A 00:08:24.804 Firmware Activation Without Reset: N/A 00:08:24.804 Multiple Update Detection Support: N/A 00:08:24.804 Firmware Update 
Granularity: No Information Provided 00:08:24.804 Per-Namespace SMART Log: Yes 00:08:24.804 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.804 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:24.804 Command Effects Log Page: Supported 00:08:24.804 Get Log Page Extended Data: Supported 00:08:24.804 Telemetry Log Pages: Not Supported 00:08:24.804 Persistent Event Log Pages: Not Supported 00:08:24.804 Supported Log Pages Log Page: May Support 00:08:24.804 Commands Supported & Effects Log Page: Not Supported 00:08:24.804 Feature Identifiers & Effects Log Page:May Support 00:08:24.804 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.804 Data Area 4 for Telemetry Log: Not Supported 00:08:24.804 Error Log Page Entries Supported: 1 00:08:24.804 Keep Alive: Not Supported 00:08:24.804 00:08:24.804 NVM Command Set Attributes 00:08:24.804 ========================== 00:08:24.804 Submission Queue Entry Size 00:08:24.804 Max: 64 00:08:24.804 Min: 64 00:08:24.804 Completion Queue Entry Size 00:08:24.804 Max: 16 00:08:24.804 Min: 16 00:08:24.804 Number of Namespaces: 256 00:08:24.804 Compare Command: Supported 00:08:24.804 Write Uncorrectable Command: Not Supported 00:08:24.804 Dataset Management Command: Supported 00:08:24.804 Write Zeroes Command: Supported 00:08:24.804 Set Features Save Field: Supported 00:08:24.804 Reservations: Not Supported 00:08:24.804 Timestamp: Supported 00:08:24.804 Copy: Supported 00:08:24.804 Volatile Write Cache: Present 00:08:24.804 Atomic Write Unit (Normal): 1 00:08:24.804 Atomic Write Unit (PFail): 1 00:08:24.804 Atomic Compare & Write Unit: 1 00:08:24.804 Fused Compare & Write: Not Supported 00:08:24.804 Scatter-Gather List 00:08:24.804 SGL Command Set: Supported 00:08:24.804 SGL Keyed: Not Supported 00:08:24.804 SGL Bit Bucket Descriptor: Not Supported 00:08:24.804 SGL Metadata Pointer: Not Supported 00:08:24.804 Oversized SGL: Not Supported 00:08:24.804 SGL Metadata Address: Not Supported 00:08:24.804 SGL Offset: Not Supported 00:08:24.804 Transport SGL Data Block: Not Supported 00:08:24.804 Replay Protected Memory Block: Not Supported 00:08:24.804 00:08:24.804 Firmware Slot Information 00:08:24.804 ========================= 00:08:24.804 Active slot: 1 00:08:24.804 Slot 1 Firmware Revision: 1.0 00:08:24.804 00:08:24.804 00:08:24.804 Commands Supported and Effects 00:08:24.804 ============================== 00:08:24.804 Admin Commands 00:08:24.804 -------------- 00:08:24.804 Delete I/O Submission Queue (00h): Supported 00:08:24.804 Create I/O Submission Queue (01h): Supported 00:08:24.804 Get Log Page (02h): Supported 00:08:24.804 Delete I/O Completion Queue (04h): Supported 00:08:24.804 Create I/O Completion Queue (05h): Supported 00:08:24.804 Identify (06h): Supported 00:08:24.804 Abort (08h): Supported 00:08:24.804 Set Features (09h): Supported 00:08:24.804 Get Features (0Ah): Supported 00:08:24.804 Asynchronous Event Request (0Ch): Supported 00:08:24.804 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.804 Directive Send (19h): Supported 00:08:24.804 Directive Receive (1Ah): Supported 00:08:24.804 Virtualization Management (1Ch): Supported 00:08:24.804 Doorbell Buffer Config (7Ch): Supported 00:08:24.804 Format NVM (80h): Supported LBA-Change 00:08:24.804 I/O Commands 00:08:24.804 ------------ 00:08:24.804 Flush (00h): Supported LBA-Change 00:08:24.804 Write (01h): Supported LBA-Change 00:08:24.804 Read (02h): Supported 00:08:24.804 Compare (05h): Supported 00:08:24.804 Write Zeroes (08h): Supported LBA-Change 00:08:24.804 
Dataset Management (09h): Supported LBA-Change 00:08:24.804 Unknown (0Ch): Supported 00:08:24.804 Unknown (12h): Supported 00:08:24.804 Copy (19h): Supported LBA-Change 00:08:24.804 Unknown (1Dh): Supported LBA-Change 00:08:24.804 00:08:24.804 Error Log 00:08:24.804 ========= 00:08:24.804 00:08:24.804 Arbitration 00:08:24.804 =========== 00:08:24.804 Arbitration Burst: no limit 00:08:24.804 00:08:24.804 Power Management 00:08:24.804 ================ 00:08:24.804 Number of Power States: 1 00:08:24.804 Current Power State: Power State #0 00:08:24.804 Power State #0: 00:08:24.804 Max Power: 25.00 W 00:08:24.804 Non-Operational State: Operational 00:08:24.804 Entry Latency: 16 microseconds 00:08:24.804 Exit Latency: 4 microseconds 00:08:24.804 Relative Read Throughput: 0 00:08:24.804 Relative Read Latency: 0 00:08:24.804 Relative Write Throughput: 0 00:08:24.804 Relative Write Latency: 0 00:08:24.804 Idle Power: Not Reported 00:08:24.804 Active Power: Not Reported 00:08:24.804 Non-Operational Permissive Mode: Not Supported 00:08:24.804 00:08:24.804 Health Information 00:08:24.804 ================== 00:08:24.804 Critical Warnings: 00:08:24.804 Available Spare Space: OK 00:08:24.804 Temperature: OK 00:08:24.804 Device Reliability: OK 00:08:24.804 Read Only: No 00:08:24.804 Volatile Memory Backup: OK 00:08:24.804 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.804 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.804 Available Spare: 0% 00:08:24.804 Available Spare Threshold: 0% 00:08:24.804 Life Percentage Used: 0% 00:08:24.804 Data Units Read: 1085 00:08:24.804 Data Units Written: 958 00:08:24.804 Host Read Commands: 55574 00:08:24.804 Host Write Commands: 54475 00:08:24.805 Controller Busy Time: 0 minutes 00:08:24.805 Power Cycles: 0 00:08:24.805 Power On Hours: 0 hours 00:08:24.805 Unsafe Shutdowns: 0 00:08:24.805 Unrecoverable Media Errors: 0 00:08:24.805 Lifetime Error Log Entries: 0 00:08:24.805 Warning Temperature Time: 0 minutes 00:08:24.805 Critical Temperature Time: 0 minutes 00:08:24.805 00:08:24.805 Number of Queues 00:08:24.805 ================ 00:08:24.805 Number of I/O Submission Queues: 64 00:08:24.805 Number of I/O Completion Queues: 64 00:08:24.805 00:08:24.805 ZNS Specific Controller Data 00:08:24.805 ============================ 00:08:24.805 Zone Append Size Limit: 0 00:08:24.805 00:08:24.805 00:08:24.805 Active Namespaces 00:08:24.805 ================= 00:08:24.805 Namespace ID:1 00:08:24.805 Error Recovery Timeout: Unlimited 00:08:24.805 Command Set Identifier: NVM (00h) 00:08:24.805 Deallocate: Supported 00:08:24.805 Deallocated/Unwritten Error: Supported 00:08:24.805 Deallocated Read Value: All 0x00 00:08:24.805 Deallocate in Write Zeroes: Not Supported 00:08:24.805 Deallocated Guard Field: 0xFFFF 00:08:24.805 Flush: Supported 00:08:24.805 Reservation: Not Supported 00:08:24.805 Namespace Sharing Capabilities: Private 00:08:24.805 Size (in LBAs): 1310720 (5GiB) 00:08:24.805 Capacity (in LBAs): 1310720 (5GiB) 00:08:24.805 Utilization (in LBAs): 1310720 (5GiB) 00:08:24.805 Thin Provisioning: Not Supported 00:08:24.805 Per-NS Atomic Units: No 00:08:24.805 Maximum Single Source Range Length: 128 00:08:24.805 Maximum Copy Length: 128 00:08:24.805 Maximum Source Range Count: 128 00:08:24.805 NGUID/EUI64 Never Reused: No 00:08:24.805 Namespace Write Protected: No 00:08:24.805 Number of LBA Formats: 8 00:08:24.805 Current LBA Format: LBA Format #04 00:08:24.805 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.805 LBA Format #01: Data Size: 512 Metadata Size: 8 
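Lines like "Current LBA Format: LBA Format #04" above are decoded from the namespace's FLBAS byte: in NVMe 1.4, bits 3:0 select the format index and bit 4 selects extended (inline) metadata. A sketch of that decode; the FLBAS value shown is a hypothetical one that would produce the 12341 output above, and newer specs add extra index bits for controllers with more than 16 formats, which this ignores:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical FLBAS consistent with the 12341 dump above:
     * format index 4, metadata (if any) not transferred inline. */
    uint8_t flbas = 0x04;

    uint8_t fmt_index   = flbas & 0x0F;        /* bits 3:0 */
    uint8_t extended_md = (flbas >> 4) & 0x01; /* bit 4    */

    printf("Current LBA Format: LBA Format #%02u, extended metadata: %s\n",
           fmt_index, extended_md ? "yes" : "no");
    return 0;
}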
00:08:24.805 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.805 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.805 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.805 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.805 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.805 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.805 00:08:24.805 NVM Specific Namespace Data 00:08:24.805 =========================== 00:08:24.805 Logical Block Storage Tag Mask: 0 00:08:24.805 Protection Information Capabilities: 00:08:24.805 16b Guard Protection Information Storage Tag Support: No 00:08:24.805 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.805 Storage Tag Check Read Support: No 00:08:24.805 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.805 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:24.805 10:02:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:24.805 ===================================================== 00:08:24.805 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.805 ===================================================== 00:08:24.805 Controller Capabilities/Features 00:08:24.805 ================================ 00:08:24.805 Vendor ID: 1b36 00:08:24.805 Subsystem Vendor ID: 1af4 00:08:24.805 Serial Number: 12342 00:08:24.805 Model Number: QEMU NVMe Ctrl 00:08:24.805 Firmware Version: 8.0.0 00:08:24.805 Recommended Arb Burst: 6 00:08:24.805 IEEE OUI Identifier: 00 54 52 00:08:24.805 Multi-path I/O 00:08:24.805 May have multiple subsystem ports: No 00:08:24.805 May have multiple controllers: No 00:08:24.805 Associated with SR-IOV VF: No 00:08:24.805 Max Data Transfer Size: 524288 00:08:24.805 Max Number of Namespaces: 256 00:08:24.805 Max Number of I/O Queues: 64 00:08:24.805 NVMe Specification Version (VS): 1.4 00:08:24.805 NVMe Specification Version (Identify): 1.4 00:08:24.805 Maximum Queue Entries: 2048 00:08:24.805 Contiguous Queues Required: Yes 00:08:24.805 Arbitration Mechanisms Supported 00:08:24.805 Weighted Round Robin: Not Supported 00:08:24.805 Vendor Specific: Not Supported 00:08:24.805 Reset Timeout: 7500 ms 00:08:24.805 Doorbell Stride: 4 bytes 00:08:24.805 NVM Subsystem Reset: Not Supported 00:08:24.805 Command Sets Supported 00:08:24.805 NVM Command Set: Supported 00:08:24.805 Boot Partition: Not Supported 00:08:24.805 Memory Page Size Minimum: 4096 bytes 00:08:24.805 Memory Page Size Maximum: 65536 bytes 00:08:24.805 Persistent Memory Region: Not Supported 00:08:24.805 Optional Asynchronous Events Supported 00:08:24.805 Namespace Attribute Notices: Supported 00:08:24.805 Firmware 
Activation Notices: Not Supported 00:08:24.805 ANA Change Notices: Not Supported 00:08:24.805 PLE Aggregate Log Change Notices: Not Supported 00:08:24.805 LBA Status Info Alert Notices: Not Supported 00:08:24.805 EGE Aggregate Log Change Notices: Not Supported 00:08:24.805 Normal NVM Subsystem Shutdown event: Not Supported 00:08:24.805 Zone Descriptor Change Notices: Not Supported 00:08:24.805 Discovery Log Change Notices: Not Supported 00:08:24.805 Controller Attributes 00:08:24.805 128-bit Host Identifier: Not Supported 00:08:24.805 Non-Operational Permissive Mode: Not Supported 00:08:24.805 NVM Sets: Not Supported 00:08:24.805 Read Recovery Levels: Not Supported 00:08:24.805 Endurance Groups: Not Supported 00:08:24.805 Predictable Latency Mode: Not Supported 00:08:24.805 Traffic Based Keep ALive: Not Supported 00:08:24.805 Namespace Granularity: Not Supported 00:08:24.805 SQ Associations: Not Supported 00:08:24.805 UUID List: Not Supported 00:08:24.805 Multi-Domain Subsystem: Not Supported 00:08:24.805 Fixed Capacity Management: Not Supported 00:08:24.805 Variable Capacity Management: Not Supported 00:08:24.805 Delete Endurance Group: Not Supported 00:08:24.805 Delete NVM Set: Not Supported 00:08:24.805 Extended LBA Formats Supported: Supported 00:08:24.805 Flexible Data Placement Supported: Not Supported 00:08:24.805 00:08:24.805 Controller Memory Buffer Support 00:08:24.805 ================================ 00:08:24.805 Supported: No 00:08:24.805 00:08:24.805 Persistent Memory Region Support 00:08:24.805 ================================ 00:08:24.805 Supported: No 00:08:24.805 00:08:24.805 Admin Command Set Attributes 00:08:24.805 ============================ 00:08:24.805 Security Send/Receive: Not Supported 00:08:24.805 Format NVM: Supported 00:08:24.805 Firmware Activate/Download: Not Supported 00:08:24.805 Namespace Management: Supported 00:08:24.805 Device Self-Test: Not Supported 00:08:24.805 Directives: Supported 00:08:24.805 NVMe-MI: Not Supported 00:08:24.805 Virtualization Management: Not Supported 00:08:24.805 Doorbell Buffer Config: Supported 00:08:24.805 Get LBA Status Capability: Not Supported 00:08:24.805 Command & Feature Lockdown Capability: Not Supported 00:08:24.805 Abort Command Limit: 4 00:08:24.805 Async Event Request Limit: 4 00:08:24.805 Number of Firmware Slots: N/A 00:08:24.805 Firmware Slot 1 Read-Only: N/A 00:08:24.805 Firmware Activation Without Reset: N/A 00:08:24.805 Multiple Update Detection Support: N/A 00:08:24.805 Firmware Update Granularity: No Information Provided 00:08:24.805 Per-Namespace SMART Log: Yes 00:08:24.806 Asymmetric Namespace Access Log Page: Not Supported 00:08:24.806 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:24.806 Command Effects Log Page: Supported 00:08:24.806 Get Log Page Extended Data: Supported 00:08:24.806 Telemetry Log Pages: Not Supported 00:08:24.806 Persistent Event Log Pages: Not Supported 00:08:24.806 Supported Log Pages Log Page: May Support 00:08:24.806 Commands Supported & Effects Log Page: Not Supported 00:08:24.806 Feature Identifiers & Effects Log Page:May Support 00:08:24.806 NVMe-MI Commands & Effects Log Page: May Support 00:08:24.806 Data Area 4 for Telemetry Log: Not Supported 00:08:24.806 Error Log Page Entries Supported: 1 00:08:24.806 Keep Alive: Not Supported 00:08:24.806 00:08:24.806 NVM Command Set Attributes 00:08:24.806 ========================== 00:08:24.806 Submission Queue Entry Size 00:08:24.806 Max: 64 00:08:24.806 Min: 64 00:08:24.806 Completion Queue Entry Size 00:08:24.806 Max: 16 
00:08:24.806 Min: 16 00:08:24.806 Number of Namespaces: 256 00:08:24.806 Compare Command: Supported 00:08:24.806 Write Uncorrectable Command: Not Supported 00:08:24.806 Dataset Management Command: Supported 00:08:24.806 Write Zeroes Command: Supported 00:08:24.806 Set Features Save Field: Supported 00:08:24.806 Reservations: Not Supported 00:08:24.806 Timestamp: Supported 00:08:24.806 Copy: Supported 00:08:24.806 Volatile Write Cache: Present 00:08:24.806 Atomic Write Unit (Normal): 1 00:08:24.806 Atomic Write Unit (PFail): 1 00:08:24.806 Atomic Compare & Write Unit: 1 00:08:24.806 Fused Compare & Write: Not Supported 00:08:24.806 Scatter-Gather List 00:08:24.806 SGL Command Set: Supported 00:08:24.806 SGL Keyed: Not Supported 00:08:24.806 SGL Bit Bucket Descriptor: Not Supported 00:08:24.806 SGL Metadata Pointer: Not Supported 00:08:24.806 Oversized SGL: Not Supported 00:08:24.806 SGL Metadata Address: Not Supported 00:08:24.806 SGL Offset: Not Supported 00:08:24.806 Transport SGL Data Block: Not Supported 00:08:24.806 Replay Protected Memory Block: Not Supported 00:08:24.806 00:08:24.806 Firmware Slot Information 00:08:24.806 ========================= 00:08:24.806 Active slot: 1 00:08:24.806 Slot 1 Firmware Revision: 1.0 00:08:24.806 00:08:24.806 00:08:24.806 Commands Supported and Effects 00:08:24.806 ============================== 00:08:24.806 Admin Commands 00:08:24.806 -------------- 00:08:24.806 Delete I/O Submission Queue (00h): Supported 00:08:24.806 Create I/O Submission Queue (01h): Supported 00:08:24.806 Get Log Page (02h): Supported 00:08:24.806 Delete I/O Completion Queue (04h): Supported 00:08:24.806 Create I/O Completion Queue (05h): Supported 00:08:24.806 Identify (06h): Supported 00:08:24.806 Abort (08h): Supported 00:08:24.806 Set Features (09h): Supported 00:08:24.806 Get Features (0Ah): Supported 00:08:24.806 Asynchronous Event Request (0Ch): Supported 00:08:24.806 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:24.806 Directive Send (19h): Supported 00:08:24.806 Directive Receive (1Ah): Supported 00:08:24.806 Virtualization Management (1Ch): Supported 00:08:24.806 Doorbell Buffer Config (7Ch): Supported 00:08:24.806 Format NVM (80h): Supported LBA-Change 00:08:24.806 I/O Commands 00:08:24.806 ------------ 00:08:24.806 Flush (00h): Supported LBA-Change 00:08:24.806 Write (01h): Supported LBA-Change 00:08:24.806 Read (02h): Supported 00:08:24.806 Compare (05h): Supported 00:08:24.806 Write Zeroes (08h): Supported LBA-Change 00:08:24.806 Dataset Management (09h): Supported LBA-Change 00:08:24.806 Unknown (0Ch): Supported 00:08:24.806 Unknown (12h): Supported 00:08:24.806 Copy (19h): Supported LBA-Change 00:08:24.806 Unknown (1Dh): Supported LBA-Change 00:08:24.806 00:08:24.806 Error Log 00:08:24.806 ========= 00:08:24.806 00:08:24.806 Arbitration 00:08:24.806 =========== 00:08:24.806 Arbitration Burst: no limit 00:08:24.806 00:08:24.806 Power Management 00:08:24.806 ================ 00:08:24.806 Number of Power States: 1 00:08:24.806 Current Power State: Power State #0 00:08:24.806 Power State #0: 00:08:24.806 Max Power: 25.00 W 00:08:24.806 Non-Operational State: Operational 00:08:24.806 Entry Latency: 16 microseconds 00:08:24.806 Exit Latency: 4 microseconds 00:08:24.806 Relative Read Throughput: 0 00:08:24.806 Relative Read Latency: 0 00:08:24.806 Relative Write Throughput: 0 00:08:24.806 Relative Write Latency: 0 00:08:24.806 Idle Power: Not Reported 00:08:24.806 Active Power: Not Reported 00:08:24.806 Non-Operational Permissive Mode: Not Supported 
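The identify dumps in this test are produced by nvme.sh looping spdk_nvme_identify over each controller's PCIe address (the `for bdf in "${bdfs[@]}"` trace lines above). A minimal standalone sketch of that loop, assuming SPDK is built under $SPDK_DIR and that the bdfs array has already been populated with addresses bound to a userspace driver (the discovery helper itself is not shown in this log):

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)   # addresses seen in this run
    for bdf in "${bdfs[@]}"; do
        # -r selects the transport ID to probe; -i 0 joins shared-memory group 0
        "$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done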
00:08:24.806 00:08:24.806 Health Information 00:08:24.806 ================== 00:08:24.806 Critical Warnings: 00:08:24.806 Available Spare Space: OK 00:08:24.806 Temperature: OK 00:08:24.806 Device Reliability: OK 00:08:24.806 Read Only: No 00:08:24.806 Volatile Memory Backup: OK 00:08:24.806 Current Temperature: 323 Kelvin (50 Celsius) 00:08:24.806 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:24.806 Available Spare: 0% 00:08:24.806 Available Spare Threshold: 0% 00:08:24.806 Life Percentage Used: 0% 00:08:24.806 Data Units Read: 2243 00:08:24.806 Data Units Written: 2030 00:08:24.806 Host Read Commands: 112203 00:08:24.806 Host Write Commands: 110473 00:08:24.806 Controller Busy Time: 0 minutes 00:08:24.806 Power Cycles: 0 00:08:24.806 Power On Hours: 0 hours 00:08:24.806 Unsafe Shutdowns: 0 00:08:24.806 Unrecoverable Media Errors: 0 00:08:24.806 Lifetime Error Log Entries: 0 00:08:24.806 Warning Temperature Time: 0 minutes 00:08:24.806 Critical Temperature Time: 0 minutes 00:08:24.806 00:08:24.806 Number of Queues 00:08:24.806 ================ 00:08:24.806 Number of I/O Submission Queues: 64 00:08:24.806 Number of I/O Completion Queues: 64 00:08:24.806 00:08:24.806 ZNS Specific Controller Data 00:08:24.806 ============================ 00:08:24.806 Zone Append Size Limit: 0 00:08:24.806 00:08:24.806 00:08:24.806 Active Namespaces 00:08:24.806 ================= 00:08:24.806 Namespace ID:1 00:08:24.806 Error Recovery Timeout: Unlimited 00:08:24.806 Command Set Identifier: NVM (00h) 00:08:24.806 Deallocate: Supported 00:08:24.806 Deallocated/Unwritten Error: Supported 00:08:24.806 Deallocated Read Value: All 0x00 00:08:24.806 Deallocate in Write Zeroes: Not Supported 00:08:24.806 Deallocated Guard Field: 0xFFFF 00:08:24.806 Flush: Supported 00:08:24.806 Reservation: Not Supported 00:08:24.806 Namespace Sharing Capabilities: Private 00:08:24.806 Size (in LBAs): 1048576 (4GiB) 00:08:24.806 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.806 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.806 Thin Provisioning: Not Supported 00:08:24.806 Per-NS Atomic Units: No 00:08:24.806 Maximum Single Source Range Length: 128 00:08:24.806 Maximum Copy Length: 128 00:08:24.806 Maximum Source Range Count: 128 00:08:24.806 NGUID/EUI64 Never Reused: No 00:08:24.806 Namespace Write Protected: No 00:08:24.806 Number of LBA Formats: 8 00:08:24.806 Current LBA Format: LBA Format #04 00:08:24.806 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.806 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.807 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.807 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.807 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.807 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.807 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.807 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.807 00:08:24.807 NVM Specific Namespace Data 00:08:24.807 =========================== 00:08:24.807 Logical Block Storage Tag Mask: 0 00:08:24.807 Protection Information Capabilities: 00:08:24.807 16b Guard Protection Information Storage Tag Support: No 00:08:24.807 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.807 Storage Tag Check Read Support: No 00:08:24.807 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Namespace ID:2 00:08:24.807 Error Recovery Timeout: Unlimited 00:08:24.807 Command Set Identifier: NVM (00h) 00:08:24.807 Deallocate: Supported 00:08:24.807 Deallocated/Unwritten Error: Supported 00:08:24.807 Deallocated Read Value: All 0x00 00:08:24.807 Deallocate in Write Zeroes: Not Supported 00:08:24.807 Deallocated Guard Field: 0xFFFF 00:08:24.807 Flush: Supported 00:08:24.807 Reservation: Not Supported 00:08:24.807 Namespace Sharing Capabilities: Private 00:08:24.807 Size (in LBAs): 1048576 (4GiB) 00:08:24.807 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.807 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.807 Thin Provisioning: Not Supported 00:08:24.807 Per-NS Atomic Units: No 00:08:24.807 Maximum Single Source Range Length: 128 00:08:24.807 Maximum Copy Length: 128 00:08:24.807 Maximum Source Range Count: 128 00:08:24.807 NGUID/EUI64 Never Reused: No 00:08:24.807 Namespace Write Protected: No 00:08:24.807 Number of LBA Formats: 8 00:08:24.807 Current LBA Format: LBA Format #04 00:08:24.807 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.807 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.807 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.807 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.807 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.807 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.807 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.807 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.807 00:08:24.807 NVM Specific Namespace Data 00:08:24.807 =========================== 00:08:24.807 Logical Block Storage Tag Mask: 0 00:08:24.807 Protection Information Capabilities: 00:08:24.807 16b Guard Protection Information Storage Tag Support: No 00:08:24.807 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:24.807 Storage Tag Check Read Support: No 00:08:24.807 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:24.807 Namespace ID:3 00:08:24.807 Error Recovery Timeout: Unlimited 00:08:24.807 Command Set Identifier: NVM (00h) 00:08:24.807 Deallocate: Supported 00:08:24.807 Deallocated/Unwritten Error: Supported 00:08:24.807 Deallocated Read 
Value: All 0x00 00:08:24.807 Deallocate in Write Zeroes: Not Supported 00:08:24.807 Deallocated Guard Field: 0xFFFF 00:08:24.807 Flush: Supported 00:08:24.807 Reservation: Not Supported 00:08:24.807 Namespace Sharing Capabilities: Private 00:08:24.807 Size (in LBAs): 1048576 (4GiB) 00:08:24.807 Capacity (in LBAs): 1048576 (4GiB) 00:08:24.807 Utilization (in LBAs): 1048576 (4GiB) 00:08:24.807 Thin Provisioning: Not Supported 00:08:24.807 Per-NS Atomic Units: No 00:08:24.807 Maximum Single Source Range Length: 128 00:08:24.807 Maximum Copy Length: 128 00:08:24.807 Maximum Source Range Count: 128 00:08:24.807 NGUID/EUI64 Never Reused: No 00:08:24.807 Namespace Write Protected: No 00:08:24.807 Number of LBA Formats: 8 00:08:24.807 Current LBA Format: LBA Format #04 00:08:24.807 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:24.807 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:24.807 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:24.807 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:24.807 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:24.807 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:24.807 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:24.807 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:24.807 00:08:24.807 NVM Specific Namespace Data 00:08:24.807 =========================== 00:08:24.807 Logical Block Storage Tag Mask: 0 00:08:24.807 Protection Information Capabilities: 00:08:24.807 16b Guard Protection Information Storage Tag Support: No 00:08:24.807 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:25.074 Storage Tag Check Read Support: No 00:08:25.074 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.074 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.074 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.074 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.074 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.074 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.075 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.075 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.075 10:02:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:25.075 10:02:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:25.075 ===================================================== 00:08:25.075 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.075 ===================================================== 00:08:25.075 Controller Capabilities/Features 00:08:25.075 ================================ 00:08:25.075 Vendor ID: 1b36 00:08:25.075 Subsystem Vendor ID: 1af4 00:08:25.075 Serial Number: 12343 00:08:25.075 Model Number: QEMU NVMe Ctrl 00:08:25.075 Firmware Version: 8.0.0 00:08:25.075 Recommended Arb Burst: 6 00:08:25.075 IEEE OUI Identifier: 00 54 52 00:08:25.075 Multi-path I/O 00:08:25.075 May have multiple subsystem ports: No 00:08:25.075 May have multiple controllers: Yes 00:08:25.075 Associated with SR-IOV VF: No 00:08:25.075 Max Data Transfer Size: 524288 00:08:25.075 Max Number of Namespaces: 
256 00:08:25.075 Max Number of I/O Queues: 64 00:08:25.075 NVMe Specification Version (VS): 1.4 00:08:25.075 NVMe Specification Version (Identify): 1.4 00:08:25.075 Maximum Queue Entries: 2048 00:08:25.075 Contiguous Queues Required: Yes 00:08:25.075 Arbitration Mechanisms Supported 00:08:25.075 Weighted Round Robin: Not Supported 00:08:25.075 Vendor Specific: Not Supported 00:08:25.075 Reset Timeout: 7500 ms 00:08:25.075 Doorbell Stride: 4 bytes 00:08:25.075 NVM Subsystem Reset: Not Supported 00:08:25.075 Command Sets Supported 00:08:25.075 NVM Command Set: Supported 00:08:25.075 Boot Partition: Not Supported 00:08:25.075 Memory Page Size Minimum: 4096 bytes 00:08:25.075 Memory Page Size Maximum: 65536 bytes 00:08:25.075 Persistent Memory Region: Not Supported 00:08:25.075 Optional Asynchronous Events Supported 00:08:25.075 Namespace Attribute Notices: Supported 00:08:25.075 Firmware Activation Notices: Not Supported 00:08:25.075 ANA Change Notices: Not Supported 00:08:25.075 PLE Aggregate Log Change Notices: Not Supported 00:08:25.075 LBA Status Info Alert Notices: Not Supported 00:08:25.075 EGE Aggregate Log Change Notices: Not Supported 00:08:25.075 Normal NVM Subsystem Shutdown event: Not Supported 00:08:25.075 Zone Descriptor Change Notices: Not Supported 00:08:25.075 Discovery Log Change Notices: Not Supported 00:08:25.075 Controller Attributes 00:08:25.075 128-bit Host Identifier: Not Supported 00:08:25.075 Non-Operational Permissive Mode: Not Supported 00:08:25.075 NVM Sets: Not Supported 00:08:25.075 Read Recovery Levels: Not Supported 00:08:25.075 Endurance Groups: Supported 00:08:25.075 Predictable Latency Mode: Not Supported 00:08:25.075 Traffic Based Keep Alive: Not Supported 00:08:25.075 Namespace Granularity: Not Supported 00:08:25.075 SQ Associations: Not Supported 00:08:25.075 UUID List: Not Supported 00:08:25.075 Multi-Domain Subsystem: Not Supported 00:08:25.075 Fixed Capacity Management: Not Supported 00:08:25.075 Variable Capacity Management: Not Supported 00:08:25.075 Delete Endurance Group: Not Supported 00:08:25.075 Delete NVM Set: Not Supported 00:08:25.075 Extended LBA Formats Supported: Supported 00:08:25.075 Flexible Data Placement Supported: Supported 00:08:25.075 00:08:25.075 Controller Memory Buffer Support 00:08:25.075 ================================ 00:08:25.075 Supported: No 00:08:25.075 00:08:25.075 Persistent Memory Region Support 00:08:25.075 ================================ 00:08:25.075 Supported: No 00:08:25.075 00:08:25.075 Admin Command Set Attributes 00:08:25.075 ============================ 00:08:25.075 Security Send/Receive: Not Supported 00:08:25.075 Format NVM: Supported 00:08:25.075 Firmware Activate/Download: Not Supported 00:08:25.075 Namespace Management: Supported 00:08:25.075 Device Self-Test: Not Supported 00:08:25.075 Directives: Supported 00:08:25.075 NVMe-MI: Not Supported 00:08:25.075 Virtualization Management: Not Supported 00:08:25.075 Doorbell Buffer Config: Supported 00:08:25.075 Get LBA Status Capability: Not Supported 00:08:25.075 Command & Feature Lockdown Capability: Not Supported 00:08:25.075 Abort Command Limit: 4 00:08:25.075 Async Event Request Limit: 4 00:08:25.075 Number of Firmware Slots: N/A 00:08:25.075 Firmware Slot 1 Read-Only: N/A 00:08:25.075 Firmware Activation Without Reset: N/A 00:08:25.075 Multiple Update Detection Support: N/A 00:08:25.075 Firmware Update Granularity: No Information Provided 00:08:25.075 Per-Namespace SMART Log: Yes 00:08:25.075 Asymmetric Namespace Access Log Page: Not Supported
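The namespace geometry reported in these dumps can be cross-checked from the LBA count and the data size of the current LBA format (#04, 4096 bytes). A quick shell verification of the two sizes that appear in this log:

    echo $(( 1048576 * 4096 ))   # 4294967296 B = 4 GiB, the 0000:00:12.0 namespaces above
    echo $(( 262144 * 4096 ))    # 1073741824 B = 1 GiB, the 0000:00:13.0 namespace below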
00:08:25.075 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:25.075 Command Effects Log Page: Supported 00:08:25.075 Get Log Page Extended Data: Supported 00:08:25.075 Telemetry Log Pages: Not Supported 00:08:25.075 Persistent Event Log Pages: Not Supported 00:08:25.075 Supported Log Pages Log Page: May Support 00:08:25.075 Commands Supported & Effects Log Page: Not Supported 00:08:25.075 Feature Identifiers & Effects Log Page: May Support 00:08:25.075 NVMe-MI Commands & Effects Log Page: May Support 00:08:25.075 Data Area 4 for Telemetry Log: Not Supported 00:08:25.075 Error Log Page Entries Supported: 1 00:08:25.075 Keep Alive: Not Supported 00:08:25.075 00:08:25.075 NVM Command Set Attributes 00:08:25.075 ========================== 00:08:25.075 Submission Queue Entry Size 00:08:25.075 Max: 64 00:08:25.075 Min: 64 00:08:25.075 Completion Queue Entry Size 00:08:25.075 Max: 16 00:08:25.075 Min: 16 00:08:25.075 Number of Namespaces: 256 00:08:25.075 Compare Command: Supported 00:08:25.075 Write Uncorrectable Command: Not Supported 00:08:25.075 Dataset Management Command: Supported 00:08:25.075 Write Zeroes Command: Supported 00:08:25.075 Set Features Save Field: Supported 00:08:25.075 Reservations: Not Supported 00:08:25.075 Timestamp: Supported 00:08:25.075 Copy: Supported 00:08:25.075 Volatile Write Cache: Present 00:08:25.075 Atomic Write Unit (Normal): 1 00:08:25.075 Atomic Write Unit (PFail): 1 00:08:25.075 Atomic Compare & Write Unit: 1 00:08:25.075 Fused Compare & Write: Not Supported 00:08:25.075 Scatter-Gather List 00:08:25.075 SGL Command Set: Supported 00:08:25.075 SGL Keyed: Not Supported 00:08:25.075 SGL Bit Bucket Descriptor: Not Supported 00:08:25.075 SGL Metadata Pointer: Not Supported 00:08:25.075 Oversized SGL: Not Supported 00:08:25.075 SGL Metadata Address: Not Supported 00:08:25.075 SGL Offset: Not Supported 00:08:25.075 Transport SGL Data Block: Not Supported 00:08:25.075 Replay Protected Memory Block: Not Supported 00:08:25.075 00:08:25.075 Firmware Slot Information 00:08:25.075 ========================= 00:08:25.075 Active slot: 1 00:08:25.075 Slot 1 Firmware Revision: 1.0 00:08:25.075 00:08:25.075 00:08:25.075 Commands Supported and Effects 00:08:25.075 ============================== 00:08:25.075 Admin Commands 00:08:25.075 -------------- 00:08:25.075 Delete I/O Submission Queue (00h): Supported 00:08:25.075 Create I/O Submission Queue (01h): Supported 00:08:25.075 Get Log Page (02h): Supported 00:08:25.075 Delete I/O Completion Queue (04h): Supported 00:08:25.075 Create I/O Completion Queue (05h): Supported 00:08:25.075 Identify (06h): Supported 00:08:25.075 Abort (08h): Supported 00:08:25.075 Set Features (09h): Supported 00:08:25.076 Get Features (0Ah): Supported 00:08:25.076 Asynchronous Event Request (0Ch): Supported 00:08:25.076 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:25.076 Directive Send (19h): Supported 00:08:25.076 Directive Receive (1Ah): Supported 00:08:25.076 Virtualization Management (1Ch): Supported 00:08:25.076 Doorbell Buffer Config (7Ch): Supported 00:08:25.076 Format NVM (80h): Supported LBA-Change 00:08:25.076 I/O Commands 00:08:25.076 ------------ 00:08:25.076 Flush (00h): Supported LBA-Change 00:08:25.076 Write (01h): Supported LBA-Change 00:08:25.076 Read (02h): Supported 00:08:25.076 Compare (05h): Supported 00:08:25.076 Write Zeroes (08h): Supported LBA-Change 00:08:25.076 Dataset Management (09h): Supported LBA-Change 00:08:25.076 Unknown (0Ch): Supported 00:08:25.076 Unknown (12h): Supported 00:08:25.076 Copy
(19h): Supported LBA-Change 00:08:25.076 Unknown (1Dh): Supported LBA-Change 00:08:25.076 00:08:25.076 Error Log 00:08:25.076 ========= 00:08:25.076 00:08:25.076 Arbitration 00:08:25.076 =========== 00:08:25.076 Arbitration Burst: no limit 00:08:25.076 00:08:25.076 Power Management 00:08:25.076 ================ 00:08:25.076 Number of Power States: 1 00:08:25.076 Current Power State: Power State #0 00:08:25.076 Power State #0: 00:08:25.076 Max Power: 25.00 W 00:08:25.076 Non-Operational State: Operational 00:08:25.076 Entry Latency: 16 microseconds 00:08:25.076 Exit Latency: 4 microseconds 00:08:25.076 Relative Read Throughput: 0 00:08:25.076 Relative Read Latency: 0 00:08:25.076 Relative Write Throughput: 0 00:08:25.076 Relative Write Latency: 0 00:08:25.076 Idle Power: Not Reported 00:08:25.076 Active Power: Not Reported 00:08:25.076 Non-Operational Permissive Mode: Not Supported 00:08:25.076 00:08:25.076 Health Information 00:08:25.076 ================== 00:08:25.076 Critical Warnings: 00:08:25.076 Available Spare Space: OK 00:08:25.076 Temperature: OK 00:08:25.076 Device Reliability: OK 00:08:25.076 Read Only: No 00:08:25.076 Volatile Memory Backup: OK 00:08:25.076 Current Temperature: 323 Kelvin (50 Celsius) 00:08:25.076 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:25.076 Available Spare: 0% 00:08:25.076 Available Spare Threshold: 0% 00:08:25.076 Life Percentage Used: 0% 00:08:25.076 Data Units Read: 862 00:08:25.076 Data Units Written: 791 00:08:25.076 Host Read Commands: 38435 00:08:25.076 Host Write Commands: 37858 00:08:25.076 Controller Busy Time: 0 minutes 00:08:25.076 Power Cycles: 0 00:08:25.076 Power On Hours: 0 hours 00:08:25.076 Unsafe Shutdowns: 0 00:08:25.076 Unrecoverable Media Errors: 0 00:08:25.076 Lifetime Error Log Entries: 0 00:08:25.076 Warning Temperature Time: 0 minutes 00:08:25.076 Critical Temperature Time: 0 minutes 00:08:25.076 00:08:25.076 Number of Queues 00:08:25.076 ================ 00:08:25.076 Number of I/O Submission Queues: 64 00:08:25.076 Number of I/O Completion Queues: 64 00:08:25.076 00:08:25.076 ZNS Specific Controller Data 00:08:25.076 ============================ 00:08:25.076 Zone Append Size Limit: 0 00:08:25.076 00:08:25.076 00:08:25.076 Active Namespaces 00:08:25.076 ================= 00:08:25.076 Namespace ID:1 00:08:25.076 Error Recovery Timeout: Unlimited 00:08:25.076 Command Set Identifier: NVM (00h) 00:08:25.076 Deallocate: Supported 00:08:25.076 Deallocated/Unwritten Error: Supported 00:08:25.076 Deallocated Read Value: All 0x00 00:08:25.076 Deallocate in Write Zeroes: Not Supported 00:08:25.076 Deallocated Guard Field: 0xFFFF 00:08:25.076 Flush: Supported 00:08:25.076 Reservation: Not Supported 00:08:25.076 Namespace Sharing Capabilities: Multiple Controllers 00:08:25.076 Size (in LBAs): 262144 (1GiB) 00:08:25.076 Capacity (in LBAs): 262144 (1GiB) 00:08:25.076 Utilization (in LBAs): 262144 (1GiB) 00:08:25.076 Thin Provisioning: Not Supported 00:08:25.076 Per-NS Atomic Units: No 00:08:25.076 Maximum Single Source Range Length: 128 00:08:25.076 Maximum Copy Length: 128 00:08:25.076 Maximum Source Range Count: 128 00:08:25.076 NGUID/EUI64 Never Reused: No 00:08:25.076 Namespace Write Protected: No 00:08:25.076 Endurance group ID: 1 00:08:25.076 Number of LBA Formats: 8 00:08:25.076 Current LBA Format: LBA Format #04 00:08:25.076 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:25.076 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:25.076 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:25.076 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:25.076 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:25.076 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:25.076 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:25.076 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:25.076 00:08:25.076 Get Feature FDP: 00:08:25.076 ================ 00:08:25.076 Enabled: Yes 00:08:25.076 FDP configuration index: 0 00:08:25.076 00:08:25.076 FDP configurations log page 00:08:25.076 =========================== 00:08:25.076 Number of FDP configurations: 1 00:08:25.076 Version: 0 00:08:25.076 Size: 112 00:08:25.076 FDP Configuration Descriptor: 0 00:08:25.076 Descriptor Size: 96 00:08:25.076 Reclaim Group Identifier format: 2 00:08:25.076 FDP Volatile Write Cache: Not Present 00:08:25.076 FDP Configuration: Valid 00:08:25.076 Vendor Specific Size: 0 00:08:25.076 Number of Reclaim Groups: 2 00:08:25.076 Number of Reclaim Unit Handles: 8 00:08:25.076 Max Placement Identifiers: 128 00:08:25.076 Number of Namespaces Supported: 256 00:08:25.076 Reclaim Unit Nominal Size: 6000000 bytes 00:08:25.076 Estimated Reclaim Unit Time Limit: Not Reported 00:08:25.076 RUH Desc #000: RUH Type: Initially Isolated 00:08:25.076 RUH Desc #001: RUH Type: Initially Isolated 00:08:25.076 RUH Desc #002: RUH Type: Initially Isolated 00:08:25.076 RUH Desc #003: RUH Type: Initially Isolated 00:08:25.076 RUH Desc #004: RUH Type: Initially Isolated 00:08:25.076 RUH Desc #005: RUH Type: Initially Isolated 00:08:25.076 RUH Desc #006: RUH Type: Initially Isolated 00:08:25.076 RUH Desc #007: RUH Type: Initially Isolated 00:08:25.076 00:08:25.076 FDP reclaim unit handle usage log page 00:08:25.076 ====================================== 00:08:25.076 Number of Reclaim Unit Handles: 8 00:08:25.076 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:25.076 RUH Usage Desc #001: RUH Attributes: Unused 00:08:25.076 RUH Usage Desc #002: RUH Attributes: Unused 00:08:25.076 RUH Usage Desc #003: RUH Attributes: Unused 00:08:25.076 RUH Usage Desc #004: RUH Attributes: Unused 00:08:25.076 RUH Usage Desc #005: RUH Attributes: Unused 00:08:25.076 RUH Usage Desc #006: RUH Attributes: Unused 00:08:25.076 RUH Usage Desc #007: RUH Attributes: Unused 00:08:25.076 00:08:25.076 FDP statistics log page 00:08:25.076 ======================= 00:08:25.076 Host bytes with metadata written: 504078336 00:08:25.076 Media bytes with metadata written: 504135680 00:08:25.076 Media bytes erased: 0 00:08:25.076 00:08:25.076 FDP events log page 00:08:25.076 =================== 00:08:25.076 Number of FDP events: 0 00:08:25.076 00:08:25.076 NVM Specific Namespace Data 00:08:25.077 =========================== 00:08:25.077 Logical Block Storage Tag Mask: 0 00:08:25.077 Protection Information Capabilities: 00:08:25.077 16b Guard Protection Information Storage Tag Support: No 00:08:25.077 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:25.077 Storage Tag Check Read Support: No 00:08:25.077 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:25.077 00:08:25.077 real 0m1.027s 00:08:25.077 user 0m0.338s 00:08:25.077 sys 0m0.463s 00:08:25.077 10:02:53 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.077 10:02:53 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:25.077 ************************************ 00:08:25.077 END TEST nvme_identify 00:08:25.077 ************************************ 00:08:25.077 10:02:53 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:25.077 10:02:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.077 10:02:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.077 10:02:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.077 ************************************ 00:08:25.077 START TEST nvme_perf 00:08:25.077 ************************************ 00:08:25.077 10:02:53 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:25.077 10:02:53 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:26.493 Initializing NVMe Controllers 00:08:26.493 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.493 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.493 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.493 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.493 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:26.493 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:26.493 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:26.493 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:26.493 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:26.493 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:26.493 Initialization complete. Launching workers. 
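The perf pass drives every attached namespace with the spdk_nvme_perf invocation shown above. A minimal reduction of it; the -q/-w/-o/-t meanings come from the tool's usage text, and the remaining flags are carried over from the log as-is:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/build/bin/spdk_nvme_perf" \
        -q 128 `# queue depth per namespace` \
        -w read `# workload type: sequential reads` \
        -o 12288 `# I/O size in bytes (12 KiB)` \
        -t 1 `# run time in seconds` \
        -LL -i 0 -N `# latency detail, shm group 0, as invoked above`

    # cross-check of the Total row in the table below: IOPS x I/O size = MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 44831.95 * 12288 / 1048576 }'   # 525.37 MiB/s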
00:08:26.493 ======================================================== 00:08:26.493 Latency(us) 00:08:26.493 Device Information : IOPS MiB/s Average min max 00:08:26.493 PCIE (0000:00:11.0) NSID 1 from core 0: 7461.36 87.44 17169.38 9260.38 36573.78 00:08:26.493 PCIE (0000:00:13.0) NSID 1 from core 0: 7461.36 87.44 17157.96 8762.42 35825.45 00:08:26.493 PCIE (0000:00:10.0) NSID 1 from core 0: 7461.36 87.44 17138.35 7414.27 35594.00 00:08:26.493 PCIE (0000:00:12.0) NSID 1 from core 0: 7461.36 87.44 17120.01 7080.77 35247.63 00:08:26.493 PCIE (0000:00:12.0) NSID 2 from core 0: 7461.36 87.44 17100.82 5127.70 35918.17 00:08:26.493 PCIE (0000:00:12.0) NSID 3 from core 0: 7525.14 88.19 16936.78 4676.69 28150.75 00:08:26.493 ======================================================== 00:08:26.493 Total : 44831.95 525.37 17103.64 4676.69 36573.78 00:08:26.493 00:08:26.493 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:26.493 ================================================================================= 00:08:26.493 1.00000% : 13812.972us 00:08:26.493 10.00000% : 14922.043us 00:08:26.493 25.00000% : 15627.815us 00:08:26.493 50.00000% : 16736.886us 00:08:26.493 75.00000% : 18249.255us 00:08:26.493 90.00000% : 19862.449us 00:08:26.493 95.00000% : 20769.871us 00:08:26.493 98.00000% : 23088.837us 00:08:26.493 99.00000% : 28230.892us 00:08:26.493 99.50000% : 35893.563us 00:08:26.493 99.90000% : 36498.511us 00:08:26.493 99.99000% : 36700.160us 00:08:26.493 99.99900% : 36700.160us 00:08:26.493 99.99990% : 36700.160us 00:08:26.493 99.99999% : 36700.160us 00:08:26.493 00:08:26.493 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:26.493 ================================================================================= 00:08:26.493 1.00000% : 13712.148us 00:08:26.493 10.00000% : 14922.043us 00:08:26.493 25.00000% : 15627.815us 00:08:26.493 50.00000% : 16837.711us 00:08:26.493 75.00000% : 18249.255us 00:08:26.493 90.00000% : 19862.449us 00:08:26.493 95.00000% : 20669.046us 00:08:26.493 98.00000% : 22786.363us 00:08:26.493 99.00000% : 28230.892us 00:08:26.493 99.50000% : 35288.615us 00:08:26.493 99.90000% : 35691.914us 00:08:26.493 99.99000% : 35893.563us 00:08:26.493 99.99900% : 35893.563us 00:08:26.493 99.99990% : 35893.563us 00:08:26.493 99.99999% : 35893.563us 00:08:26.493 00:08:26.493 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:26.493 ================================================================================= 00:08:26.493 1.00000% : 13611.323us 00:08:26.493 10.00000% : 14821.218us 00:08:26.493 25.00000% : 15627.815us 00:08:26.493 50.00000% : 16736.886us 00:08:26.493 75.00000% : 18249.255us 00:08:26.493 90.00000% : 19862.449us 00:08:26.493 95.00000% : 20870.695us 00:08:26.493 98.00000% : 22483.889us 00:08:26.493 99.00000% : 27827.594us 00:08:26.493 99.50000% : 34885.317us 00:08:26.493 99.90000% : 35691.914us 00:08:26.493 99.99000% : 35691.914us 00:08:26.493 99.99900% : 35691.914us 00:08:26.493 99.99990% : 35691.914us 00:08:26.493 99.99999% : 35691.914us 00:08:26.493 00:08:26.493 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:26.493 ================================================================================= 00:08:26.493 1.00000% : 13812.972us 00:08:26.493 10.00000% : 14821.218us 00:08:26.493 25.00000% : 15526.991us 00:08:26.493 50.00000% : 16736.886us 00:08:26.493 75.00000% : 18350.080us 00:08:26.493 90.00000% : 19862.449us 00:08:26.493 95.00000% : 20769.871us 00:08:26.493 98.00000% : 22181.415us 
00:08:26.493 99.00000% : 27424.295us 00:08:26.493 99.50000% : 34683.668us 00:08:26.493 99.90000% : 35288.615us 00:08:26.493 99.99000% : 35288.615us 00:08:26.493 99.99900% : 35288.615us 00:08:26.493 99.99990% : 35288.615us 00:08:26.493 99.99999% : 35288.615us 00:08:26.493 00:08:26.493 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:26.493 ================================================================================= 00:08:26.493 1.00000% : 12098.954us 00:08:26.493 10.00000% : 14821.218us 00:08:26.493 25.00000% : 15627.815us 00:08:26.493 50.00000% : 16736.886us 00:08:26.493 75.00000% : 18249.255us 00:08:26.493 90.00000% : 19963.274us 00:08:26.493 95.00000% : 20669.046us 00:08:26.493 98.00000% : 23189.662us 00:08:26.493 99.00000% : 28029.243us 00:08:26.493 99.50000% : 35288.615us 00:08:26.493 99.90000% : 35893.563us 00:08:26.493 99.99000% : 36095.212us 00:08:26.493 99.99900% : 36095.212us 00:08:26.493 99.99990% : 36095.212us 00:08:26.493 99.99999% : 36095.212us 00:08:26.493 00:08:26.493 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:26.494 ================================================================================= 00:08:26.494 1.00000% : 10687.409us 00:08:26.494 10.00000% : 14922.043us 00:08:26.494 25.00000% : 15526.991us 00:08:26.494 50.00000% : 16636.062us 00:08:26.494 75.00000% : 18350.080us 00:08:26.494 90.00000% : 19761.625us 00:08:26.494 95.00000% : 20568.222us 00:08:26.494 98.00000% : 21576.468us 00:08:26.494 99.00000% : 23996.258us 00:08:26.494 99.50000% : 27625.945us 00:08:26.494 99.90000% : 28029.243us 00:08:26.494 99.99000% : 28230.892us 00:08:26.494 99.99900% : 28230.892us 00:08:26.494 99.99990% : 28230.892us 00:08:26.494 99.99999% : 28230.892us 00:08:26.494 00:08:26.494 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:26.494 ============================================================================== 00:08:26.494 Range in us Cumulative IO count 00:08:26.494 9225.452 - 9275.865: 0.0534% ( 4) 00:08:26.494 9275.865 - 9326.277: 0.1603% ( 8) 00:08:26.494 9326.277 - 9376.689: 0.2270% ( 5) 00:08:26.494 9376.689 - 9427.102: 0.2671% ( 3) 00:08:26.494 9427.102 - 9477.514: 0.3472% ( 6) 00:08:26.494 9477.514 - 9527.926: 0.4006% ( 4) 00:08:26.494 9527.926 - 9578.338: 0.4541% ( 4) 00:08:26.494 9578.338 - 9628.751: 0.5075% ( 4) 00:08:26.494 9628.751 - 9679.163: 0.5342% ( 2) 00:08:26.494 9679.163 - 9729.575: 0.5609% ( 2) 00:08:26.494 9729.575 - 9779.988: 0.5876% ( 2) 00:08:26.494 9779.988 - 9830.400: 0.6143% ( 2) 00:08:26.494 9830.400 - 9880.812: 0.6410% ( 2) 00:08:26.494 9880.812 - 9931.225: 0.6677% ( 2) 00:08:26.494 9931.225 - 9981.637: 0.7078% ( 3) 00:08:26.494 9981.637 - 10032.049: 0.7345% ( 2) 00:08:26.494 10032.049 - 10082.462: 0.7612% ( 2) 00:08:26.494 10082.462 - 10132.874: 0.7879% ( 2) 00:08:26.494 10183.286 - 10233.698: 0.8146% ( 2) 00:08:26.494 10233.698 - 10284.111: 0.8413% ( 2) 00:08:26.494 10284.111 - 10334.523: 0.8547% ( 1) 00:08:26.494 13510.498 - 13611.323: 0.9215% ( 5) 00:08:26.494 13611.323 - 13712.148: 0.9882% ( 5) 00:08:26.494 13712.148 - 13812.972: 1.0684% ( 6) 00:08:26.494 13812.972 - 13913.797: 1.3088% ( 18) 00:08:26.494 13913.797 - 14014.622: 1.6293% ( 24) 00:08:26.494 14014.622 - 14115.446: 2.0433% ( 31) 00:08:26.494 14115.446 - 14216.271: 2.5374% ( 37) 00:08:26.494 14216.271 - 14317.095: 3.1116% ( 43) 00:08:26.494 14317.095 - 14417.920: 4.0064% ( 67) 00:08:26.494 14417.920 - 14518.745: 5.1149% ( 83) 00:08:26.494 14518.745 - 14619.569: 6.4503% ( 100) 00:08:26.494 14619.569 - 14720.394: 7.8926% 
( 108) 00:08:26.494 14720.394 - 14821.218: 9.6822% ( 134) 00:08:26.494 14821.218 - 14922.043: 11.5919% ( 143) 00:08:26.494 14922.043 - 15022.868: 13.5684% ( 148) 00:08:26.494 15022.868 - 15123.692: 15.6384% ( 155) 00:08:26.494 15123.692 - 15224.517: 17.6149% ( 148) 00:08:26.494 15224.517 - 15325.342: 19.9653% ( 176) 00:08:26.494 15325.342 - 15426.166: 22.1421% ( 163) 00:08:26.494 15426.166 - 15526.991: 24.3056% ( 162) 00:08:26.494 15526.991 - 15627.815: 26.6426% ( 175) 00:08:26.494 15627.815 - 15728.640: 29.1800% ( 190) 00:08:26.494 15728.640 - 15829.465: 31.6373% ( 184) 00:08:26.494 15829.465 - 15930.289: 33.8542% ( 166) 00:08:26.494 15930.289 - 16031.114: 36.0310% ( 163) 00:08:26.494 16031.114 - 16131.938: 37.9541% ( 144) 00:08:26.494 16131.938 - 16232.763: 40.0107% ( 154) 00:08:26.494 16232.763 - 16333.588: 42.0540% ( 153) 00:08:26.494 16333.588 - 16434.412: 43.8435% ( 134) 00:08:26.494 16434.412 - 16535.237: 46.0069% ( 162) 00:08:26.494 16535.237 - 16636.062: 48.2639% ( 169) 00:08:26.494 16636.062 - 16736.886: 50.4006% ( 160) 00:08:26.494 16736.886 - 16837.711: 52.3237% ( 144) 00:08:26.494 16837.711 - 16938.535: 54.5005% ( 163) 00:08:26.494 16938.535 - 17039.360: 56.5171% ( 151) 00:08:26.494 17039.360 - 17140.185: 58.5871% ( 155) 00:08:26.494 17140.185 - 17241.009: 60.4834% ( 142) 00:08:26.494 17241.009 - 17341.834: 62.2196% ( 130) 00:08:26.494 17341.834 - 17442.658: 63.9690% ( 131) 00:08:26.494 17442.658 - 17543.483: 65.5315% ( 117) 00:08:26.494 17543.483 - 17644.308: 67.1207% ( 119) 00:08:26.494 17644.308 - 17745.132: 68.7500% ( 122) 00:08:26.494 17745.132 - 17845.957: 70.2591% ( 113) 00:08:26.494 17845.957 - 17946.782: 71.5144% ( 94) 00:08:26.494 17946.782 - 18047.606: 72.8365% ( 99) 00:08:26.494 18047.606 - 18148.431: 74.1052% ( 95) 00:08:26.494 18148.431 - 18249.255: 75.4674% ( 102) 00:08:26.494 18249.255 - 18350.080: 76.7762% ( 98) 00:08:26.494 18350.080 - 18450.905: 77.8980% ( 84) 00:08:26.494 18450.905 - 18551.729: 79.0999% ( 90) 00:08:26.494 18551.729 - 18652.554: 80.3018% ( 90) 00:08:26.494 18652.554 - 18753.378: 81.3835% ( 81) 00:08:26.494 18753.378 - 18854.203: 82.3451% ( 72) 00:08:26.494 18854.203 - 18955.028: 83.2933% ( 71) 00:08:26.494 18955.028 - 19055.852: 84.3082% ( 76) 00:08:26.494 19055.852 - 19156.677: 85.2163% ( 68) 00:08:26.494 19156.677 - 19257.502: 86.0443% ( 62) 00:08:26.494 19257.502 - 19358.326: 86.8189% ( 58) 00:08:26.494 19358.326 - 19459.151: 87.4599% ( 48) 00:08:26.494 19459.151 - 19559.975: 88.2612% ( 60) 00:08:26.494 19559.975 - 19660.800: 88.9824% ( 54) 00:08:26.494 19660.800 - 19761.625: 89.4631% ( 36) 00:08:26.494 19761.625 - 19862.449: 90.1042% ( 48) 00:08:26.494 19862.449 - 19963.274: 90.7185% ( 46) 00:08:26.494 19963.274 - 20064.098: 91.2927% ( 43) 00:08:26.494 20064.098 - 20164.923: 91.9605% ( 50) 00:08:26.494 20164.923 - 20265.748: 92.6549% ( 52) 00:08:26.494 20265.748 - 20366.572: 93.2425% ( 44) 00:08:26.494 20366.572 - 20467.397: 93.8168% ( 43) 00:08:26.494 20467.397 - 20568.222: 94.2975% ( 36) 00:08:26.494 20568.222 - 20669.046: 94.8317% ( 40) 00:08:26.494 20669.046 - 20769.871: 95.3392% ( 38) 00:08:26.494 20769.871 - 20870.695: 95.7399% ( 30) 00:08:26.494 20870.695 - 20971.520: 96.1138% ( 28) 00:08:26.494 20971.520 - 21072.345: 96.3942% ( 21) 00:08:26.494 21072.345 - 21173.169: 96.6346% ( 18) 00:08:26.494 21173.169 - 21273.994: 96.8349% ( 15) 00:08:26.494 21273.994 - 21374.818: 96.9818% ( 11) 00:08:26.494 21374.818 - 21475.643: 97.1154% ( 10) 00:08:26.494 21475.643 - 21576.468: 97.2623% ( 11) 00:08:26.494 21576.468 - 21677.292: 97.3691% ( 
8) 00:08:26.494 21677.292 - 21778.117: 97.4359% ( 5) 00:08:26.494 22181.415 - 22282.240: 97.4493% ( 1) 00:08:26.494 22282.240 - 22383.065: 97.5027% ( 4) 00:08:26.494 22383.065 - 22483.889: 97.5694% ( 5) 00:08:26.494 22483.889 - 22584.714: 97.6763% ( 8) 00:08:26.494 22584.714 - 22685.538: 97.7431% ( 5) 00:08:26.494 22685.538 - 22786.363: 97.8098% ( 5) 00:08:26.494 22786.363 - 22887.188: 97.8766% ( 5) 00:08:26.494 22887.188 - 22988.012: 97.9434% ( 5) 00:08:26.494 22988.012 - 23088.837: 98.0235% ( 6) 00:08:26.494 23088.837 - 23189.662: 98.0769% ( 4) 00:08:26.494 23189.662 - 23290.486: 98.1437% ( 5) 00:08:26.494 23290.486 - 23391.311: 98.2105% ( 5) 00:08:26.494 23391.311 - 23492.135: 98.2772% ( 5) 00:08:26.494 23492.135 - 23592.960: 98.2906% ( 1) 00:08:26.494 27020.997 - 27222.646: 98.3574% ( 5) 00:08:26.494 27222.646 - 27424.295: 98.5176% ( 12) 00:08:26.494 27424.295 - 27625.945: 98.6645% ( 11) 00:08:26.494 27625.945 - 27827.594: 98.7981% ( 10) 00:08:26.494 27827.594 - 28029.243: 98.9450% ( 11) 00:08:26.494 28029.243 - 28230.892: 99.0919% ( 11) 00:08:26.494 28230.892 - 28432.542: 99.1453% ( 4) 00:08:26.494 35086.966 - 35288.615: 99.1987% ( 4) 00:08:26.494 35288.615 - 35490.265: 99.3323% ( 10) 00:08:26.494 35490.265 - 35691.914: 99.4525% ( 9) 00:08:26.494 35691.914 - 35893.563: 99.5994% ( 11) 00:08:26.494 35893.563 - 36095.212: 99.7062% ( 8) 00:08:26.494 36095.212 - 36296.862: 99.8397% ( 10) 00:08:26.494 36296.862 - 36498.511: 99.9599% ( 9) 00:08:26.494 36498.511 - 36700.160: 100.0000% ( 3) 00:08:26.494 00:08:26.494 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:26.494 ============================================================================== 00:08:26.494 Range in us Cumulative IO count 00:08:26.494 8721.329 - 8771.742: 0.0134% ( 1) 00:08:26.494 8771.742 - 8822.154: 0.1335% ( 9) 00:08:26.494 8822.154 - 8872.566: 0.1736% ( 3) 00:08:26.494 8872.566 - 8922.978: 0.2270% ( 4) 00:08:26.495 8922.978 - 8973.391: 0.2804% ( 4) 00:08:26.495 8973.391 - 9023.803: 0.3339% ( 4) 00:08:26.495 9023.803 - 9074.215: 0.3873% ( 4) 00:08:26.495 9074.215 - 9124.628: 0.4407% ( 4) 00:08:26.495 9124.628 - 9175.040: 0.4941% ( 4) 00:08:26.495 9175.040 - 9225.452: 0.5342% ( 3) 00:08:26.495 9225.452 - 9275.865: 0.5876% ( 4) 00:08:26.495 9275.865 - 9326.277: 0.6410% ( 4) 00:08:26.495 9326.277 - 9376.689: 0.6811% ( 3) 00:08:26.495 9376.689 - 9427.102: 0.7212% ( 3) 00:08:26.495 9427.102 - 9477.514: 0.7479% ( 2) 00:08:26.495 9477.514 - 9527.926: 0.8013% ( 4) 00:08:26.495 9527.926 - 9578.338: 0.8413% ( 3) 00:08:26.495 9578.338 - 9628.751: 0.8547% ( 1) 00:08:26.495 13510.498 - 13611.323: 0.8681% ( 1) 00:08:26.495 13611.323 - 13712.148: 1.0283% ( 12) 00:08:26.495 13712.148 - 13812.972: 1.1752% ( 11) 00:08:26.495 13812.972 - 13913.797: 1.3889% ( 16) 00:08:26.495 13913.797 - 14014.622: 1.8029% ( 31) 00:08:26.495 14014.622 - 14115.446: 2.4306% ( 47) 00:08:26.495 14115.446 - 14216.271: 2.9647% ( 40) 00:08:26.495 14216.271 - 14317.095: 3.5657% ( 45) 00:08:26.495 14317.095 - 14417.920: 4.3269% ( 57) 00:08:26.495 14417.920 - 14518.745: 5.2350% ( 68) 00:08:26.495 14518.745 - 14619.569: 6.4370% ( 90) 00:08:26.495 14619.569 - 14720.394: 7.6790% ( 93) 00:08:26.495 14720.394 - 14821.218: 9.3483% ( 125) 00:08:26.495 14821.218 - 14922.043: 11.0310% ( 126) 00:08:26.495 14922.043 - 15022.868: 12.8072% ( 133) 00:08:26.495 15022.868 - 15123.692: 14.8104% ( 150) 00:08:26.495 15123.692 - 15224.517: 17.0673% ( 169) 00:08:26.495 15224.517 - 15325.342: 19.5112% ( 183) 00:08:26.495 15325.342 - 15426.166: 22.1020% ( 194) 
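In these histograms the percentage column is cumulative while the parenthesized count is per-bucket: for PCIE (0000:00:11.0) NSID 1 above, 0.0534% ( 4) followed by 0.1603% ( 8) means 4 + 8 = 12 of roughly 7,490 total I/Os had completed by 9326.277 us. Given the buckets as one "low - high: cum% ( count)" line each (the tool's native layout, without the Jenkins timestamps), an approximate p99 can be pulled with a small awk sketch; histogram.txt is a hypothetical capture of a single histogram:

    # upper edge (us) of the first bucket whose cumulative percentage reaches 99
    awk '$2 == "-" && $4 + 0 >= 99 { sub(":", "", $3); print $3 " us"; exit }' histogram.txt

On the 0000:00:11.0 data above this prints 28230.892 us, matching the 99.00000% : 28230.892us entry in that device's summary block.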
00:08:26.495 15426.166 - 15526.991: 24.6261% ( 189) 00:08:26.495 15526.991 - 15627.815: 26.7895% ( 162) 00:08:26.495 15627.815 - 15728.640: 29.0732% ( 171) 00:08:26.495 15728.640 - 15829.465: 31.5037% ( 182) 00:08:26.495 15829.465 - 15930.289: 34.4284% ( 219) 00:08:26.495 15930.289 - 16031.114: 36.7388% ( 173) 00:08:26.495 16031.114 - 16131.938: 38.9022% ( 162) 00:08:26.495 16131.938 - 16232.763: 40.9054% ( 150) 00:08:26.495 16232.763 - 16333.588: 42.7484% ( 138) 00:08:26.495 16333.588 - 16434.412: 44.6982% ( 146) 00:08:26.495 16434.412 - 16535.237: 46.3942% ( 127) 00:08:26.495 16535.237 - 16636.062: 48.1036% ( 128) 00:08:26.495 16636.062 - 16736.886: 49.6394% ( 115) 00:08:26.495 16736.886 - 16837.711: 51.2420% ( 120) 00:08:26.495 16837.711 - 16938.535: 52.9380% ( 127) 00:08:26.495 16938.535 - 17039.360: 54.7810% ( 138) 00:08:26.495 17039.360 - 17140.185: 56.6506% ( 140) 00:08:26.495 17140.185 - 17241.009: 58.4135% ( 132) 00:08:26.495 17241.009 - 17341.834: 60.2431% ( 137) 00:08:26.495 17341.834 - 17442.658: 62.2196% ( 148) 00:08:26.495 17442.658 - 17543.483: 63.9557% ( 130) 00:08:26.495 17543.483 - 17644.308: 65.7185% ( 132) 00:08:26.495 17644.308 - 17745.132: 67.4546% ( 130) 00:08:26.495 17745.132 - 17845.957: 69.2708% ( 136) 00:08:26.495 17845.957 - 17946.782: 71.0737% ( 135) 00:08:26.495 17946.782 - 18047.606: 72.8098% ( 130) 00:08:26.495 18047.606 - 18148.431: 74.5593% ( 131) 00:08:26.495 18148.431 - 18249.255: 75.9749% ( 106) 00:08:26.495 18249.255 - 18350.080: 77.3504% ( 103) 00:08:26.495 18350.080 - 18450.905: 78.9396% ( 119) 00:08:26.495 18450.905 - 18551.729: 80.1015% ( 87) 00:08:26.495 18551.729 - 18652.554: 81.2500% ( 86) 00:08:26.495 18652.554 - 18753.378: 82.2249% ( 73) 00:08:26.495 18753.378 - 18854.203: 83.0929% ( 65) 00:08:26.495 18854.203 - 18955.028: 84.0011% ( 68) 00:08:26.495 18955.028 - 19055.852: 84.9893% ( 74) 00:08:26.495 19055.852 - 19156.677: 85.7772% ( 59) 00:08:26.495 19156.677 - 19257.502: 86.6987% ( 69) 00:08:26.495 19257.502 - 19358.326: 87.4332% ( 55) 00:08:26.495 19358.326 - 19459.151: 87.8472% ( 31) 00:08:26.495 19459.151 - 19559.975: 88.3013% ( 34) 00:08:26.495 19559.975 - 19660.800: 88.8355% ( 40) 00:08:26.495 19660.800 - 19761.625: 89.4631% ( 47) 00:08:26.495 19761.625 - 19862.449: 90.0507% ( 44) 00:08:26.495 19862.449 - 19963.274: 90.8120% ( 57) 00:08:26.495 19963.274 - 20064.098: 91.5865% ( 58) 00:08:26.495 20064.098 - 20164.923: 92.2142% ( 47) 00:08:26.495 20164.923 - 20265.748: 92.7618% ( 41) 00:08:26.495 20265.748 - 20366.572: 93.3494% ( 44) 00:08:26.495 20366.572 - 20467.397: 93.9503% ( 45) 00:08:26.495 20467.397 - 20568.222: 94.5379% ( 44) 00:08:26.495 20568.222 - 20669.046: 95.0454% ( 38) 00:08:26.495 20669.046 - 20769.871: 95.5929% ( 41) 00:08:26.495 20769.871 - 20870.695: 96.1004% ( 38) 00:08:26.495 20870.695 - 20971.520: 96.4744% ( 28) 00:08:26.495 20971.520 - 21072.345: 96.7949% ( 24) 00:08:26.495 21072.345 - 21173.169: 96.9685% ( 13) 00:08:26.495 21173.169 - 21273.994: 97.1154% ( 11) 00:08:26.495 21273.994 - 21374.818: 97.2623% ( 11) 00:08:26.495 21374.818 - 21475.643: 97.3958% ( 10) 00:08:26.495 21475.643 - 21576.468: 97.4359% ( 3) 00:08:26.495 21878.942 - 21979.766: 97.4493% ( 1) 00:08:26.495 21979.766 - 22080.591: 97.5427% ( 7) 00:08:26.495 22080.591 - 22181.415: 97.6229% ( 6) 00:08:26.495 22181.415 - 22282.240: 97.6896% ( 5) 00:08:26.495 22282.240 - 22383.065: 97.7564% ( 5) 00:08:26.495 22383.065 - 22483.889: 97.8365% ( 6) 00:08:26.495 22483.889 - 22584.714: 97.9033% ( 5) 00:08:26.495 22584.714 - 22685.538: 97.9834% ( 6) 00:08:26.495 
22685.538 - 22786.363: 98.0502% ( 5) 00:08:26.495 22786.363 - 22887.188: 98.0903% ( 3) 00:08:26.495 22887.188 - 22988.012: 98.1303% ( 3) 00:08:26.495 22988.012 - 23088.837: 98.1704% ( 3) 00:08:26.495 23088.837 - 23189.662: 98.2105% ( 3) 00:08:26.495 23189.662 - 23290.486: 98.2505% ( 3) 00:08:26.495 23290.486 - 23391.311: 98.2906% ( 3) 00:08:26.495 26819.348 - 27020.997: 98.3173% ( 2) 00:08:26.495 27020.997 - 27222.646: 98.4509% ( 10) 00:08:26.495 27222.646 - 27424.295: 98.5844% ( 10) 00:08:26.495 27424.295 - 27625.945: 98.7046% ( 9) 00:08:26.495 27625.945 - 27827.594: 98.8248% ( 9) 00:08:26.495 27827.594 - 28029.243: 98.9717% ( 11) 00:08:26.495 28029.243 - 28230.892: 99.1186% ( 11) 00:08:26.495 28230.892 - 28432.542: 99.1453% ( 2) 00:08:26.495 34482.018 - 34683.668: 99.1587% ( 1) 00:08:26.495 34683.668 - 34885.317: 99.3056% ( 11) 00:08:26.495 34885.317 - 35086.966: 99.4658% ( 12) 00:08:26.495 35086.966 - 35288.615: 99.6127% ( 11) 00:08:26.495 35288.615 - 35490.265: 99.7596% ( 11) 00:08:26.495 35490.265 - 35691.914: 99.9065% ( 11) 00:08:26.495 35691.914 - 35893.563: 100.0000% ( 7) 00:08:26.495 00:08:26.495 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:26.495 ============================================================================== 00:08:26.495 Range in us Cumulative IO count 00:08:26.495 7410.609 - 7461.022: 0.0534% ( 4) 00:08:26.495 7461.022 - 7511.434: 0.0801% ( 2) 00:08:26.495 7511.434 - 7561.846: 0.0935% ( 1) 00:08:26.495 7561.846 - 7612.258: 0.1603% ( 5) 00:08:26.495 7612.258 - 7662.671: 0.2003% ( 3) 00:08:26.495 7662.671 - 7713.083: 0.2137% ( 1) 00:08:26.495 7713.083 - 7763.495: 0.2671% ( 4) 00:08:26.495 7763.495 - 7813.908: 0.2938% ( 2) 00:08:26.495 7813.908 - 7864.320: 0.3205% ( 2) 00:08:26.495 7864.320 - 7914.732: 0.3606% ( 3) 00:08:26.495 7914.732 - 7965.145: 0.3873% ( 2) 00:08:26.495 7965.145 - 8015.557: 0.4407% ( 4) 00:08:26.495 8015.557 - 8065.969: 0.4941% ( 4) 00:08:26.495 8065.969 - 8116.382: 0.5342% ( 3) 00:08:26.495 8116.382 - 8166.794: 0.5743% ( 3) 00:08:26.495 8166.794 - 8217.206: 0.6277% ( 4) 00:08:26.495 8217.206 - 8267.618: 0.6677% ( 3) 00:08:26.495 8267.618 - 8318.031: 0.7078% ( 3) 00:08:26.495 8318.031 - 8368.443: 0.7479% ( 3) 00:08:26.495 8368.443 - 8418.855: 0.7879% ( 3) 00:08:26.495 8418.855 - 8469.268: 0.8280% ( 3) 00:08:26.495 8469.268 - 8519.680: 0.8413% ( 1) 00:08:26.495 8519.680 - 8570.092: 0.8547% ( 1) 00:08:26.495 13308.849 - 13409.674: 0.8814% ( 2) 00:08:26.495 13409.674 - 13510.498: 0.9348% ( 4) 00:08:26.495 13510.498 - 13611.323: 1.0550% ( 9) 00:08:26.495 13611.323 - 13712.148: 1.2420% ( 14) 00:08:26.495 13712.148 - 13812.972: 1.6693% ( 32) 00:08:26.495 13812.972 - 13913.797: 1.9765% ( 23) 00:08:26.495 13913.797 - 14014.622: 2.2970% ( 24) 00:08:26.496 14014.622 - 14115.446: 2.8446% ( 41) 00:08:26.496 14115.446 - 14216.271: 3.6458% ( 60) 00:08:26.496 14216.271 - 14317.095: 4.3803% ( 55) 00:08:26.496 14317.095 - 14417.920: 5.2217% ( 63) 00:08:26.496 14417.920 - 14518.745: 6.1966% ( 73) 00:08:26.496 14518.745 - 14619.569: 7.3451% ( 86) 00:08:26.496 14619.569 - 14720.394: 8.6806% ( 100) 00:08:26.496 14720.394 - 14821.218: 10.6704% ( 149) 00:08:26.496 14821.218 - 14922.043: 12.2863% ( 121) 00:08:26.496 14922.043 - 15022.868: 13.9022% ( 121) 00:08:26.496 15022.868 - 15123.692: 15.6918% ( 134) 00:08:26.496 15123.692 - 15224.517: 17.6683% ( 148) 00:08:26.496 15224.517 - 15325.342: 19.6715% ( 150) 00:08:26.496 15325.342 - 15426.166: 21.9551% ( 171) 00:08:26.496 15426.166 - 15526.991: 24.1052% ( 161) 00:08:26.496 15526.991 - 15627.815: 
26.6159% ( 188) 00:08:26.496 15627.815 - 15728.640: 28.9263% ( 173) 00:08:26.496 15728.640 - 15829.465: 31.1031% ( 163) 00:08:26.496 15829.465 - 15930.289: 33.5470% ( 183) 00:08:26.496 15930.289 - 16031.114: 35.8173% ( 170) 00:08:26.496 16031.114 - 16131.938: 37.9541% ( 160) 00:08:26.496 16131.938 - 16232.763: 40.3446% ( 179) 00:08:26.496 16232.763 - 16333.588: 42.4947% ( 161) 00:08:26.496 16333.588 - 16434.412: 44.5913% ( 157) 00:08:26.496 16434.412 - 16535.237: 46.3007% ( 128) 00:08:26.496 16535.237 - 16636.062: 48.3707% ( 155) 00:08:26.496 16636.062 - 16736.886: 50.2938% ( 144) 00:08:26.496 16736.886 - 16837.711: 52.1501% ( 139) 00:08:26.496 16837.711 - 16938.535: 54.1132% ( 147) 00:08:26.496 16938.535 - 17039.360: 55.8494% ( 130) 00:08:26.496 17039.360 - 17140.185: 57.6656% ( 136) 00:08:26.496 17140.185 - 17241.009: 59.3483% ( 126) 00:08:26.496 17241.009 - 17341.834: 61.2847% ( 145) 00:08:26.496 17341.834 - 17442.658: 62.7671% ( 111) 00:08:26.496 17442.658 - 17543.483: 64.4498% ( 126) 00:08:26.496 17543.483 - 17644.308: 66.1325% ( 126) 00:08:26.496 17644.308 - 17745.132: 67.7751% ( 123) 00:08:26.496 17745.132 - 17845.957: 69.4712% ( 127) 00:08:26.496 17845.957 - 17946.782: 71.2340% ( 132) 00:08:26.496 17946.782 - 18047.606: 72.9167% ( 126) 00:08:26.496 18047.606 - 18148.431: 74.6795% ( 132) 00:08:26.496 18148.431 - 18249.255: 76.0417% ( 102) 00:08:26.496 18249.255 - 18350.080: 77.4172% ( 103) 00:08:26.496 18350.080 - 18450.905: 78.7527% ( 100) 00:08:26.496 18450.905 - 18551.729: 79.9145% ( 87) 00:08:26.496 18551.729 - 18652.554: 80.9161% ( 75) 00:08:26.496 18652.554 - 18753.378: 81.6640% ( 56) 00:08:26.496 18753.378 - 18854.203: 82.6656% ( 75) 00:08:26.496 18854.203 - 18955.028: 83.3200% ( 49) 00:08:26.496 18955.028 - 19055.852: 84.1079% ( 59) 00:08:26.496 19055.852 - 19156.677: 84.9493% ( 63) 00:08:26.496 19156.677 - 19257.502: 85.7772% ( 62) 00:08:26.496 19257.502 - 19358.326: 86.5385% ( 57) 00:08:26.496 19358.326 - 19459.151: 87.2730% ( 55) 00:08:26.496 19459.151 - 19559.975: 87.8873% ( 46) 00:08:26.496 19559.975 - 19660.800: 88.7286% ( 63) 00:08:26.496 19660.800 - 19761.625: 89.2094% ( 36) 00:08:26.496 19761.625 - 19862.449: 90.0240% ( 61) 00:08:26.496 19862.449 - 19963.274: 90.6384% ( 46) 00:08:26.496 19963.274 - 20064.098: 91.2927% ( 49) 00:08:26.496 20064.098 - 20164.923: 92.0940% ( 60) 00:08:26.496 20164.923 - 20265.748: 92.7350% ( 48) 00:08:26.496 20265.748 - 20366.572: 93.5230% ( 59) 00:08:26.496 20366.572 - 20467.397: 93.8435% ( 24) 00:08:26.496 20467.397 - 20568.222: 94.1774% ( 25) 00:08:26.496 20568.222 - 20669.046: 94.5646% ( 29) 00:08:26.496 20669.046 - 20769.871: 94.9653% ( 30) 00:08:26.496 20769.871 - 20870.695: 95.2057% ( 18) 00:08:26.496 20870.695 - 20971.520: 95.5662% ( 27) 00:08:26.496 20971.520 - 21072.345: 95.8467% ( 21) 00:08:26.496 21072.345 - 21173.169: 96.1405% ( 22) 00:08:26.496 21173.169 - 21273.994: 96.2874% ( 11) 00:08:26.496 21273.994 - 21374.818: 96.6480% ( 27) 00:08:26.496 21374.818 - 21475.643: 96.7949% ( 11) 00:08:26.496 21475.643 - 21576.468: 97.0353% ( 18) 00:08:26.496 21576.468 - 21677.292: 97.2089% ( 13) 00:08:26.496 21677.292 - 21778.117: 97.3424% ( 10) 00:08:26.496 21778.117 - 21878.942: 97.6362% ( 22) 00:08:26.496 21878.942 - 21979.766: 97.6763% ( 3) 00:08:26.496 21979.766 - 22080.591: 97.7698% ( 7) 00:08:26.496 22080.591 - 22181.415: 97.8365% ( 5) 00:08:26.496 22181.415 - 22282.240: 97.8766% ( 3) 00:08:26.496 22282.240 - 22383.065: 97.9834% ( 8) 00:08:26.496 22383.065 - 22483.889: 98.0369% ( 4) 00:08:26.496 22483.889 - 22584.714: 98.0636% ( 
2) 00:08:26.496 22584.714 - 22685.538: 98.1303% ( 5) 00:08:26.496 22685.538 - 22786.363: 98.1971% ( 5) 00:08:26.496 22786.363 - 22887.188: 98.2639% ( 5) 00:08:26.496 22887.188 - 22988.012: 98.2906% ( 2) 00:08:26.496 26416.049 - 26617.698: 98.3040% ( 1) 00:08:26.496 26617.698 - 26819.348: 98.3974% ( 7) 00:08:26.496 26819.348 - 27020.997: 98.5443% ( 11) 00:08:26.496 27020.997 - 27222.646: 98.6512% ( 8) 00:08:26.496 27222.646 - 27424.295: 98.7847% ( 10) 00:08:26.496 27424.295 - 27625.945: 98.9183% ( 10) 00:08:26.496 27625.945 - 27827.594: 99.0652% ( 11) 00:08:26.496 27827.594 - 28029.243: 99.1453% ( 6) 00:08:26.496 34078.720 - 34280.369: 99.1587% ( 1) 00:08:26.496 34280.369 - 34482.018: 99.3056% ( 11) 00:08:26.496 34482.018 - 34683.668: 99.3990% ( 7) 00:08:26.496 34683.668 - 34885.317: 99.5192% ( 9) 00:08:26.496 34885.317 - 35086.966: 99.6528% ( 10) 00:08:26.496 35086.966 - 35288.615: 99.7730% ( 9) 00:08:26.496 35288.615 - 35490.265: 99.8932% ( 9) 00:08:26.496 35490.265 - 35691.914: 100.0000% ( 8) 00:08:26.496 00:08:26.496 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:26.496 ============================================================================== 00:08:26.496 Range in us Cumulative IO count 00:08:26.496 7057.723 - 7108.135: 0.0401% ( 3) 00:08:26.496 7108.135 - 7158.548: 0.0935% ( 4) 00:08:26.496 7158.548 - 7208.960: 0.1335% ( 3) 00:08:26.496 7208.960 - 7259.372: 0.1736% ( 3) 00:08:26.496 7259.372 - 7309.785: 0.2270% ( 4) 00:08:26.496 7309.785 - 7360.197: 0.2804% ( 4) 00:08:26.496 7360.197 - 7410.609: 0.3339% ( 4) 00:08:26.496 7410.609 - 7461.022: 0.3739% ( 3) 00:08:26.496 7461.022 - 7511.434: 0.4274% ( 4) 00:08:26.496 7511.434 - 7561.846: 0.4808% ( 4) 00:08:26.496 7561.846 - 7612.258: 0.5342% ( 4) 00:08:26.496 7612.258 - 7662.671: 0.5876% ( 4) 00:08:26.496 7662.671 - 7713.083: 0.6277% ( 3) 00:08:26.496 7713.083 - 7763.495: 0.6811% ( 4) 00:08:26.496 7763.495 - 7813.908: 0.7212% ( 3) 00:08:26.496 7813.908 - 7864.320: 0.7746% ( 4) 00:08:26.496 7864.320 - 7914.732: 0.8280% ( 4) 00:08:26.496 7914.732 - 7965.145: 0.8547% ( 2) 00:08:26.496 13611.323 - 13712.148: 0.9482% ( 7) 00:08:26.496 13712.148 - 13812.972: 1.0951% ( 11) 00:08:26.496 13812.972 - 13913.797: 1.3622% ( 20) 00:08:26.496 13913.797 - 14014.622: 1.7094% ( 26) 00:08:26.496 14014.622 - 14115.446: 2.1234% ( 31) 00:08:26.496 14115.446 - 14216.271: 2.5107% ( 29) 00:08:26.496 14216.271 - 14317.095: 2.9915% ( 36) 00:08:26.496 14317.095 - 14417.920: 3.7660% ( 58) 00:08:26.496 14417.920 - 14518.745: 4.9546% ( 89) 00:08:26.496 14518.745 - 14619.569: 6.2366% ( 96) 00:08:26.496 14619.569 - 14720.394: 7.9460% ( 128) 00:08:26.496 14720.394 - 14821.218: 10.0694% ( 159) 00:08:26.496 14821.218 - 14922.043: 12.0860% ( 151) 00:08:26.496 14922.043 - 15022.868: 14.2628% ( 163) 00:08:26.496 15022.868 - 15123.692: 16.3862% ( 159) 00:08:26.496 15123.692 - 15224.517: 18.6832% ( 172) 00:08:26.496 15224.517 - 15325.342: 20.9802% ( 172) 00:08:26.496 15325.342 - 15426.166: 23.2639% ( 171) 00:08:26.496 15426.166 - 15526.991: 25.5743% ( 173) 00:08:26.496 15526.991 - 15627.815: 27.8846% ( 173) 00:08:26.496 15627.815 - 15728.640: 30.3419% ( 184) 00:08:26.496 15728.640 - 15829.465: 32.6522% ( 173) 00:08:26.496 15829.465 - 15930.289: 35.0427% ( 179) 00:08:26.496 15930.289 - 16031.114: 37.2730% ( 167) 00:08:26.496 16031.114 - 16131.938: 39.3162% ( 153) 00:08:26.496 16131.938 - 16232.763: 41.3729% ( 154) 00:08:26.496 16232.763 - 16333.588: 43.4161% ( 153) 00:08:26.496 16333.588 - 16434.412: 45.6063% ( 164) 00:08:26.496 16434.412 - 16535.237: 
47.8232% ( 166) 00:08:26.496 16535.237 - 16636.062: 49.8264% ( 150) 00:08:26.496 16636.062 - 16736.886: 51.7495% ( 144) 00:08:26.497 16736.886 - 16837.711: 53.5924% ( 138) 00:08:26.497 16837.711 - 16938.535: 54.9012% ( 98) 00:08:26.497 16938.535 - 17039.360: 56.4503% ( 116) 00:08:26.497 17039.360 - 17140.185: 57.9460% ( 112) 00:08:26.497 17140.185 - 17241.009: 59.5486% ( 120) 00:08:26.497 17241.009 - 17341.834: 61.1378% ( 119) 00:08:26.497 17341.834 - 17442.658: 62.6469% ( 113) 00:08:26.497 17442.658 - 17543.483: 63.8755% ( 92) 00:08:26.497 17543.483 - 17644.308: 65.2644% ( 104) 00:08:26.497 17644.308 - 17745.132: 66.6533% ( 104) 00:08:26.497 17745.132 - 17845.957: 68.1757% ( 114) 00:08:26.497 17845.957 - 17946.782: 69.7917% ( 121) 00:08:26.497 17946.782 - 18047.606: 71.4076% ( 121) 00:08:26.497 18047.606 - 18148.431: 73.0636% ( 124) 00:08:26.497 18148.431 - 18249.255: 74.5860% ( 114) 00:08:26.497 18249.255 - 18350.080: 76.0016% ( 106) 00:08:26.497 18350.080 - 18450.905: 77.4306% ( 107) 00:08:26.497 18450.905 - 18551.729: 78.7393% ( 98) 00:08:26.497 18551.729 - 18652.554: 80.1282% ( 104) 00:08:26.497 18652.554 - 18753.378: 81.3168% ( 89) 00:08:26.497 18753.378 - 18854.203: 82.4653% ( 86) 00:08:26.497 18854.203 - 18955.028: 83.3467% ( 66) 00:08:26.497 18955.028 - 19055.852: 84.2147% ( 65) 00:08:26.497 19055.852 - 19156.677: 85.0561% ( 63) 00:08:26.497 19156.677 - 19257.502: 86.0176% ( 72) 00:08:26.497 19257.502 - 19358.326: 86.8189% ( 60) 00:08:26.497 19358.326 - 19459.151: 87.5534% ( 55) 00:08:26.497 19459.151 - 19559.975: 88.2078% ( 49) 00:08:26.497 19559.975 - 19660.800: 89.0625% ( 64) 00:08:26.497 19660.800 - 19761.625: 89.6635% ( 45) 00:08:26.497 19761.625 - 19862.449: 90.2644% ( 45) 00:08:26.497 19862.449 - 19963.274: 90.9054% ( 48) 00:08:26.497 19963.274 - 20064.098: 91.5331% ( 47) 00:08:26.497 20064.098 - 20164.923: 92.1608% ( 47) 00:08:26.497 20164.923 - 20265.748: 92.8953% ( 55) 00:08:26.497 20265.748 - 20366.572: 93.5230% ( 47) 00:08:26.497 20366.572 - 20467.397: 93.9770% ( 34) 00:08:26.497 20467.397 - 20568.222: 94.4845% ( 38) 00:08:26.497 20568.222 - 20669.046: 94.9119% ( 32) 00:08:26.497 20669.046 - 20769.871: 95.2991% ( 29) 00:08:26.497 20769.871 - 20870.695: 95.5262% ( 17) 00:08:26.497 20870.695 - 20971.520: 95.7799% ( 19) 00:08:26.497 20971.520 - 21072.345: 96.0337% ( 19) 00:08:26.497 21072.345 - 21173.169: 96.3141% ( 21) 00:08:26.497 21173.169 - 21273.994: 96.5278% ( 16) 00:08:26.497 21273.994 - 21374.818: 96.7415% ( 16) 00:08:26.497 21374.818 - 21475.643: 96.9952% ( 19) 00:08:26.497 21475.643 - 21576.468: 97.2089% ( 16) 00:08:26.497 21576.468 - 21677.292: 97.3691% ( 12) 00:08:26.497 21677.292 - 21778.117: 97.4893% ( 9) 00:08:26.497 21778.117 - 21878.942: 97.6496% ( 12) 00:08:26.497 21878.942 - 21979.766: 97.8232% ( 13) 00:08:26.497 21979.766 - 22080.591: 97.9567% ( 10) 00:08:26.497 22080.591 - 22181.415: 98.0903% ( 10) 00:08:26.497 22181.415 - 22282.240: 98.1704% ( 6) 00:08:26.497 22282.240 - 22383.065: 98.2505% ( 6) 00:08:26.497 22383.065 - 22483.889: 98.2906% ( 3) 00:08:26.497 26012.751 - 26214.400: 98.3173% ( 2) 00:08:26.497 26214.400 - 26416.049: 98.4375% ( 9) 00:08:26.497 26416.049 - 26617.698: 98.5577% ( 9) 00:08:26.497 26617.698 - 26819.348: 98.6912% ( 10) 00:08:26.497 26819.348 - 27020.997: 98.8248% ( 10) 00:08:26.497 27020.997 - 27222.646: 98.9717% ( 11) 00:08:26.497 27222.646 - 27424.295: 99.1186% ( 11) 00:08:26.497 27424.295 - 27625.945: 99.1453% ( 2) 00:08:26.497 33675.422 - 33877.071: 99.1854% ( 3) 00:08:26.497 33877.071 - 34078.720: 99.2655% ( 6) 
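The blocks above are the detailed latency histograms that spdk_nvme_perf prints per namespace: each row is one bucket, giving the bucket's range in microseconds, the cumulative percentage of I/Os that completed at or below the bucket's upper bound, and the count of I/Os that landed in that bucket (rows for empty buckets are omitted, which is why the ranges jump). Below is a minimal Python sketch of how rows in this format can be parsed and read back as approximate percentiles, assuming only the row format shown above; parse_rows and percentile are hypothetical helper names, and the sample rows are illustrative stand-ins rather than values copied from this run.

import re

# One histogram row, as printed above: "<low> - <high>: <cum%> ( <count>)"
ROW = re.compile(r'([\d.]+)\s+-\s+([\d.]+):\s+([\d.]+)%\s+\(\s*(\d+)\)')

def parse_rows(lines):
    # Yields (low_us, high_us, cumulative_pct, bucket_count) per row.
    for line in lines:
        m = ROW.search(line)
        if m:
            yield (float(m[1]), float(m[2]), float(m[3]), int(m[4]))

def percentile(buckets, p):
    # The value reported for percentile p is a bucket boundary: the upper
    # bound of the first bucket whose cumulative percentage reaches p, so
    # the answer is only as precise as the bucket width at that point.
    for low, high, cum, count in buckets:
        if cum >= p:
            return high
    return None

# Illustrative rows (hypothetical values, not copied from this run):
sample = [
    "14518.745 - 14619.569:  6.2366% (  96)",
    "15526.991 - 15627.815: 25.5743% ( 173)",
    "16535.237 - 16636.062: 48.3707% ( 155)",
    "17543.483 - 17644.308: 66.1325% ( 126)",
]
buckets = list(parse_rows(sample))
print(percentile(buckets, 25.0))  # -> 15627.815

Reading a percentile off the cumulative column this way always lands on a bucket boundary, which is why the "Summary latency data" sections later in this log report values like 7713.083us rather than round numbers.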
00:08:26.497 34078.720 - 34280.369: 99.3723% ( 8) 00:08:26.497 34280.369 - 34482.018: 99.4658% ( 7) 00:08:26.497 34482.018 - 34683.668: 99.6127% ( 11) 00:08:26.497 34683.668 - 34885.317: 99.7463% ( 10) 00:08:26.497 34885.317 - 35086.966: 99.8932% ( 11) 00:08:26.497 35086.966 - 35288.615: 100.0000% ( 8) 00:08:26.497 00:08:26.497 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:26.497 ============================================================================== 00:08:26.497 Range in us Cumulative IO count 00:08:26.497 5116.849 - 5142.055: 0.0534% ( 4) 00:08:26.497 5142.055 - 5167.262: 0.0668% ( 1) 00:08:26.497 5192.468 - 5217.674: 0.1202% ( 4) 00:08:26.497 5217.674 - 5242.880: 0.1603% ( 3) 00:08:26.497 5242.880 - 5268.086: 0.2003% ( 3) 00:08:26.497 5293.292 - 5318.498: 0.2404% ( 3) 00:08:26.497 5318.498 - 5343.705: 0.2537% ( 1) 00:08:26.497 5343.705 - 5368.911: 0.2804% ( 2) 00:08:26.497 5368.911 - 5394.117: 0.3072% ( 2) 00:08:26.497 5394.117 - 5419.323: 0.3339% ( 2) 00:08:26.497 5419.323 - 5444.529: 0.3606% ( 2) 00:08:26.497 5444.529 - 5469.735: 0.3873% ( 2) 00:08:26.497 5469.735 - 5494.942: 0.4006% ( 1) 00:08:26.497 5494.942 - 5520.148: 0.4274% ( 2) 00:08:26.497 5520.148 - 5545.354: 0.4541% ( 2) 00:08:26.497 5545.354 - 5570.560: 0.4674% ( 1) 00:08:26.497 5570.560 - 5595.766: 0.4808% ( 1) 00:08:26.497 5595.766 - 5620.972: 0.5075% ( 2) 00:08:26.497 5646.178 - 5671.385: 0.5208% ( 1) 00:08:26.497 5671.385 - 5696.591: 0.5342% ( 1) 00:08:26.497 5696.591 - 5721.797: 0.5475% ( 1) 00:08:26.497 5721.797 - 5747.003: 0.5609% ( 1) 00:08:26.497 5747.003 - 5772.209: 0.5743% ( 1) 00:08:26.497 5772.209 - 5797.415: 0.5876% ( 1) 00:08:26.497 5797.415 - 5822.622: 0.6010% ( 1) 00:08:26.497 5822.622 - 5847.828: 0.6143% ( 1) 00:08:26.497 5847.828 - 5873.034: 0.6277% ( 1) 00:08:26.497 5873.034 - 5898.240: 0.6410% ( 1) 00:08:26.497 5898.240 - 5923.446: 0.6544% ( 1) 00:08:26.497 5923.446 - 5948.652: 0.6677% ( 1) 00:08:26.497 5948.652 - 5973.858: 0.6811% ( 1) 00:08:26.497 5973.858 - 5999.065: 0.7078% ( 2) 00:08:26.497 5999.065 - 6024.271: 0.7212% ( 1) 00:08:26.497 6024.271 - 6049.477: 0.7345% ( 1) 00:08:26.497 6049.477 - 6074.683: 0.7479% ( 1) 00:08:26.497 6074.683 - 6099.889: 0.7612% ( 1) 00:08:26.497 6099.889 - 6125.095: 0.7746% ( 1) 00:08:26.497 6125.095 - 6150.302: 0.7879% ( 1) 00:08:26.497 6150.302 - 6175.508: 0.8013% ( 1) 00:08:26.497 6175.508 - 6200.714: 0.8146% ( 1) 00:08:26.497 6200.714 - 6225.920: 0.8280% ( 1) 00:08:26.497 6225.920 - 6251.126: 0.8413% ( 1) 00:08:26.497 6251.126 - 6276.332: 0.8547% ( 1) 00:08:26.497 12048.542 - 12098.954: 1.0016% ( 11) 00:08:26.497 12098.954 - 12149.366: 1.0817% ( 6) 00:08:26.497 12149.366 - 12199.778: 1.1351% ( 4) 00:08:26.497 12199.778 - 12250.191: 1.1752% ( 3) 00:08:26.497 12250.191 - 12300.603: 1.2286% ( 4) 00:08:26.497 12300.603 - 12351.015: 1.2687% ( 3) 00:08:26.497 12351.015 - 12401.428: 1.3221% ( 4) 00:08:26.497 12401.428 - 12451.840: 1.3755% ( 4) 00:08:26.497 12451.840 - 12502.252: 1.4290% ( 4) 00:08:26.497 12502.252 - 12552.665: 1.4690% ( 3) 00:08:26.497 12552.665 - 12603.077: 1.4957% ( 2) 00:08:26.497 12603.077 - 12653.489: 1.5358% ( 3) 00:08:26.497 12653.489 - 12703.902: 1.5625% ( 2) 00:08:26.497 12703.902 - 12754.314: 1.6026% ( 3) 00:08:26.497 12754.314 - 12804.726: 1.6426% ( 3) 00:08:26.497 12804.726 - 12855.138: 1.6693% ( 2) 00:08:26.497 12855.138 - 12905.551: 1.6960% ( 2) 00:08:26.497 12905.551 - 13006.375: 1.7094% ( 1) 00:08:26.497 13510.498 - 13611.323: 1.7228% ( 1) 00:08:26.497 13611.323 - 13712.148: 1.9498% ( 17) 00:08:26.497 13712.148 
- 13812.972: 2.1368% ( 14) 00:08:26.497 13812.972 - 13913.797: 2.4172% ( 21) 00:08:26.497 13913.797 - 14014.622: 2.7377% ( 24) 00:08:26.497 14014.622 - 14115.446: 3.0582% ( 24) 00:08:26.497 14115.446 - 14216.271: 3.5657% ( 38) 00:08:26.497 14216.271 - 14317.095: 4.2067% ( 48) 00:08:26.497 14317.095 - 14417.920: 4.9813% ( 58) 00:08:26.497 14417.920 - 14518.745: 6.0764% ( 82) 00:08:26.497 14518.745 - 14619.569: 7.5321% ( 109) 00:08:26.497 14619.569 - 14720.394: 8.9610% ( 107) 00:08:26.497 14720.394 - 14821.218: 10.5101% ( 116) 00:08:26.497 14821.218 - 14922.043: 12.2196% ( 128) 00:08:26.497 14922.043 - 15022.868: 14.1026% ( 141) 00:08:26.497 15022.868 - 15123.692: 16.1725% ( 155) 00:08:26.497 15123.692 - 15224.517: 18.2692% ( 157) 00:08:26.497 15224.517 - 15325.342: 20.4728% ( 165) 00:08:26.497 15325.342 - 15426.166: 22.5427% ( 155) 00:08:26.497 15426.166 - 15526.991: 24.6261% ( 156) 00:08:26.497 15526.991 - 15627.815: 27.0566% ( 182) 00:08:26.498 15627.815 - 15728.640: 29.5272% ( 185) 00:08:26.498 15728.640 - 15829.465: 31.7441% ( 166) 00:08:26.498 15829.465 - 15930.289: 34.0946% ( 176) 00:08:26.498 15930.289 - 16031.114: 36.4049% ( 173) 00:08:26.498 16031.114 - 16131.938: 38.5951% ( 164) 00:08:26.498 16131.938 - 16232.763: 40.7318% ( 160) 00:08:26.498 16232.763 - 16333.588: 42.8152% ( 156) 00:08:26.498 16333.588 - 16434.412: 44.8584% ( 153) 00:08:26.498 16434.412 - 16535.237: 46.8349% ( 148) 00:08:26.498 16535.237 - 16636.062: 48.9183% ( 156) 00:08:26.498 16636.062 - 16736.886: 51.1351% ( 166) 00:08:26.498 16736.886 - 16837.711: 53.2185% ( 156) 00:08:26.498 16837.711 - 16938.535: 55.0214% ( 135) 00:08:26.498 16938.535 - 17039.360: 56.7842% ( 132) 00:08:26.498 17039.360 - 17140.185: 58.5871% ( 135) 00:08:26.498 17140.185 - 17241.009: 60.1362% ( 116) 00:08:26.498 17241.009 - 17341.834: 61.6587% ( 114) 00:08:26.498 17341.834 - 17442.658: 63.2212% ( 117) 00:08:26.498 17442.658 - 17543.483: 64.8504% ( 122) 00:08:26.498 17543.483 - 17644.308: 66.2126% ( 102) 00:08:26.498 17644.308 - 17745.132: 67.5881% ( 103) 00:08:26.498 17745.132 - 17845.957: 69.1239% ( 115) 00:08:26.498 17845.957 - 17946.782: 70.6597% ( 115) 00:08:26.498 17946.782 - 18047.606: 72.1822% ( 114) 00:08:26.498 18047.606 - 18148.431: 73.7981% ( 121) 00:08:26.498 18148.431 - 18249.255: 75.3072% ( 113) 00:08:26.498 18249.255 - 18350.080: 76.5892% ( 96) 00:08:26.498 18350.080 - 18450.905: 77.8846% ( 97) 00:08:26.498 18450.905 - 18551.729: 79.2468% ( 102) 00:08:26.498 18551.729 - 18652.554: 80.5021% ( 94) 00:08:26.498 18652.554 - 18753.378: 81.6506% ( 86) 00:08:26.498 18753.378 - 18854.203: 82.6522% ( 75) 00:08:26.498 18854.203 - 18955.028: 83.5604% ( 68) 00:08:26.498 18955.028 - 19055.852: 84.3483% ( 59) 00:08:26.498 19055.852 - 19156.677: 85.0561% ( 53) 00:08:26.498 19156.677 - 19257.502: 85.6971% ( 48) 00:08:26.498 19257.502 - 19358.326: 86.3649% ( 50) 00:08:26.498 19358.326 - 19459.151: 86.9525% ( 44) 00:08:26.498 19459.151 - 19559.975: 87.6870% ( 55) 00:08:26.498 19559.975 - 19660.800: 88.3681% ( 51) 00:08:26.498 19660.800 - 19761.625: 89.1960% ( 62) 00:08:26.498 19761.625 - 19862.449: 89.9840% ( 59) 00:08:26.498 19862.449 - 19963.274: 90.8120% ( 62) 00:08:26.498 19963.274 - 20064.098: 91.4931% ( 51) 00:08:26.498 20064.098 - 20164.923: 92.0540% ( 42) 00:08:26.498 20164.923 - 20265.748: 92.6149% ( 42) 00:08:26.498 20265.748 - 20366.572: 93.2692% ( 49) 00:08:26.498 20366.572 - 20467.397: 93.9370% ( 50) 00:08:26.498 20467.397 - 20568.222: 94.5379% ( 45) 00:08:26.498 20568.222 - 20669.046: 95.0454% ( 38) 00:08:26.498 20669.046 - 
20769.871: 95.5128% ( 35) 00:08:26.498 20769.871 - 20870.695: 95.9402% ( 32) 00:08:26.498 20870.695 - 20971.520: 96.2607% ( 24) 00:08:26.498 20971.520 - 21072.345: 96.5946% ( 25) 00:08:26.498 21072.345 - 21173.169: 96.8884% ( 22) 00:08:26.498 21173.169 - 21273.994: 97.1154% ( 17) 00:08:26.498 21273.994 - 21374.818: 97.2756% ( 12) 00:08:26.498 21374.818 - 21475.643: 97.3825% ( 8) 00:08:26.498 21475.643 - 21576.468: 97.4359% ( 4) 00:08:26.498 22282.240 - 22383.065: 97.4626% ( 2) 00:08:26.498 22383.065 - 22483.889: 97.5160% ( 4) 00:08:26.498 22483.889 - 22584.714: 97.5828% ( 5) 00:08:26.498 22584.714 - 22685.538: 97.6629% ( 6) 00:08:26.498 22685.538 - 22786.363: 97.7431% ( 6) 00:08:26.498 22786.363 - 22887.188: 97.8098% ( 5) 00:08:26.498 22887.188 - 22988.012: 97.8900% ( 6) 00:08:26.498 22988.012 - 23088.837: 97.9701% ( 6) 00:08:26.498 23088.837 - 23189.662: 98.0369% ( 5) 00:08:26.498 23189.662 - 23290.486: 98.1170% ( 6) 00:08:26.498 23290.486 - 23391.311: 98.1704% ( 4) 00:08:26.498 23391.311 - 23492.135: 98.2372% ( 5) 00:08:26.498 23492.135 - 23592.960: 98.2906% ( 4) 00:08:26.498 26617.698 - 26819.348: 98.3040% ( 1) 00:08:26.498 26819.348 - 27020.997: 98.3707% ( 5) 00:08:26.498 27020.997 - 27222.646: 98.4776% ( 8) 00:08:26.498 27222.646 - 27424.295: 98.6111% ( 10) 00:08:26.498 27424.295 - 27625.945: 98.7580% ( 11) 00:08:26.498 27625.945 - 27827.594: 98.8916% ( 10) 00:08:26.498 27827.594 - 28029.243: 99.0118% ( 9) 00:08:26.498 28029.243 - 28230.892: 99.1453% ( 10) 00:08:26.498 34482.018 - 34683.668: 99.1720% ( 2) 00:08:26.498 34683.668 - 34885.317: 99.2655% ( 7) 00:08:26.498 34885.317 - 35086.966: 99.3990% ( 10) 00:08:26.498 35086.966 - 35288.615: 99.5459% ( 11) 00:08:26.498 35288.615 - 35490.265: 99.6795% ( 10) 00:08:26.498 35490.265 - 35691.914: 99.8264% ( 11) 00:08:26.498 35691.914 - 35893.563: 99.9733% ( 11) 00:08:26.498 35893.563 - 36095.212: 100.0000% ( 2) 00:08:26.498 00:08:26.498 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:26.498 ============================================================================== 00:08:26.498 Range in us Cumulative IO count 00:08:26.498 4663.138 - 4688.345: 0.0132% ( 1) 00:08:26.498 4688.345 - 4713.551: 0.0397% ( 2) 00:08:26.498 4713.551 - 4738.757: 0.0662% ( 2) 00:08:26.498 4738.757 - 4763.963: 0.0927% ( 2) 00:08:26.498 4763.963 - 4789.169: 0.1192% ( 2) 00:08:26.498 4789.169 - 4814.375: 0.1589% ( 3) 00:08:26.498 4814.375 - 4839.582: 0.1854% ( 2) 00:08:26.498 4839.582 - 4864.788: 0.2119% ( 2) 00:08:26.498 4864.788 - 4889.994: 0.2383% ( 2) 00:08:26.498 4889.994 - 4915.200: 0.2516% ( 1) 00:08:26.498 4915.200 - 4940.406: 0.2781% ( 2) 00:08:26.498 4940.406 - 4965.612: 0.3046% ( 2) 00:08:26.498 4965.612 - 4990.818: 0.3310% ( 2) 00:08:26.498 4990.818 - 5016.025: 0.3575% ( 2) 00:08:26.498 5016.025 - 5041.231: 0.3840% ( 2) 00:08:26.498 5041.231 - 5066.437: 0.4105% ( 2) 00:08:26.498 5066.437 - 5091.643: 0.4370% ( 2) 00:08:26.498 5091.643 - 5116.849: 0.4502% ( 1) 00:08:26.498 5116.849 - 5142.055: 0.4635% ( 1) 00:08:26.498 5142.055 - 5167.262: 0.4899% ( 2) 00:08:26.498 5167.262 - 5192.468: 0.5032% ( 1) 00:08:26.498 5192.468 - 5217.674: 0.5297% ( 2) 00:08:26.498 5217.674 - 5242.880: 0.5429% ( 1) 00:08:26.498 5242.880 - 5268.086: 0.5694% ( 2) 00:08:26.498 5268.086 - 5293.292: 0.5959% ( 2) 00:08:26.498 5293.292 - 5318.498: 0.6224% ( 2) 00:08:26.498 5318.498 - 5343.705: 0.6488% ( 2) 00:08:26.498 5343.705 - 5368.911: 0.6621% ( 1) 00:08:26.498 5368.911 - 5394.117: 0.6886% ( 2) 00:08:26.498 5394.117 - 5419.323: 0.7150% ( 2) 00:08:26.498 5419.323 - 
5444.529: 0.7415% ( 2) 00:08:26.498 5444.529 - 5469.735: 0.7680% ( 2) 00:08:26.498 5469.735 - 5494.942: 0.7945% ( 2) 00:08:26.498 5494.942 - 5520.148: 0.8210% ( 2) 00:08:26.498 5520.148 - 5545.354: 0.8475% ( 2) 00:08:26.498 10536.172 - 10586.585: 0.8739% ( 2) 00:08:26.498 10586.585 - 10636.997: 0.9799% ( 8) 00:08:26.498 10636.997 - 10687.409: 1.0196% ( 3) 00:08:26.498 10687.409 - 10737.822: 1.0726% ( 4) 00:08:26.498 10737.822 - 10788.234: 1.1123% ( 3) 00:08:26.498 10788.234 - 10838.646: 1.1653% ( 4) 00:08:26.498 10838.646 - 10889.058: 1.2182% ( 4) 00:08:26.498 10889.058 - 10939.471: 1.2579% ( 3) 00:08:26.498 10939.471 - 10989.883: 1.3109% ( 4) 00:08:26.498 10989.883 - 11040.295: 1.3639% ( 4) 00:08:26.498 11040.295 - 11090.708: 1.4168% ( 4) 00:08:26.498 11090.708 - 11141.120: 1.4566% ( 3) 00:08:26.498 11141.120 - 11191.532: 1.5095% ( 4) 00:08:26.498 11191.532 - 11241.945: 1.5493% ( 3) 00:08:26.498 11241.945 - 11292.357: 1.6022% ( 4) 00:08:26.498 11292.357 - 11342.769: 1.6552% ( 4) 00:08:26.498 11342.769 - 11393.182: 1.6949% ( 3) 00:08:26.498 13611.323 - 13712.148: 1.7214% ( 2) 00:08:26.498 13712.148 - 13812.972: 1.8671% ( 11) 00:08:26.498 13812.972 - 13913.797: 2.0524% ( 14) 00:08:26.498 13913.797 - 14014.622: 2.2908% ( 18) 00:08:26.498 14014.622 - 14115.446: 2.5821% ( 22) 00:08:26.498 14115.446 - 14216.271: 3.0191% ( 33) 00:08:26.498 14216.271 - 14317.095: 3.6149% ( 45) 00:08:26.498 14317.095 - 14417.920: 4.4624% ( 64) 00:08:26.498 14417.920 - 14518.745: 5.6541% ( 90) 00:08:26.499 14518.745 - 14619.569: 6.8988% ( 94) 00:08:26.499 14619.569 - 14720.394: 7.9714% ( 81) 00:08:26.499 14720.394 - 14821.218: 9.6266% ( 125) 00:08:26.499 14821.218 - 14922.043: 11.5996% ( 149) 00:08:26.499 14922.043 - 15022.868: 13.7182% ( 160) 00:08:26.499 15022.868 - 15123.692: 15.7044% ( 150) 00:08:26.499 15123.692 - 15224.517: 17.9290% ( 168) 00:08:26.499 15224.517 - 15325.342: 20.4184% ( 188) 00:08:26.499 15325.342 - 15426.166: 23.0667% ( 200) 00:08:26.499 15426.166 - 15526.991: 25.6886% ( 198) 00:08:26.499 15526.991 - 15627.815: 28.3104% ( 198) 00:08:26.499 15627.815 - 15728.640: 30.8925% ( 195) 00:08:26.499 15728.640 - 15829.465: 33.5011% ( 197) 00:08:26.499 15829.465 - 15930.289: 36.0169% ( 190) 00:08:26.499 15930.289 - 16031.114: 38.2945% ( 172) 00:08:26.499 16031.114 - 16131.938: 40.4926% ( 166) 00:08:26.499 16131.938 - 16232.763: 42.6245% ( 161) 00:08:26.499 16232.763 - 16333.588: 44.9417% ( 175) 00:08:26.499 16333.588 - 16434.412: 47.0074% ( 156) 00:08:26.499 16434.412 - 16535.237: 48.9274% ( 145) 00:08:26.499 16535.237 - 16636.062: 50.7945% ( 141) 00:08:26.499 16636.062 - 16736.886: 52.5689% ( 134) 00:08:26.499 16736.886 - 16837.711: 54.0651% ( 113) 00:08:26.499 16837.711 - 16938.535: 55.5747% ( 114) 00:08:26.499 16938.535 - 17039.360: 57.4417% ( 141) 00:08:26.499 17039.360 - 17140.185: 59.2691% ( 138) 00:08:26.499 17140.185 - 17241.009: 60.9375% ( 126) 00:08:26.499 17241.009 - 17341.834: 62.2484% ( 99) 00:08:26.499 17341.834 - 17442.658: 63.7579% ( 114) 00:08:26.499 17442.658 - 17543.483: 65.1483% ( 105) 00:08:26.499 17543.483 - 17644.308: 66.6578% ( 114) 00:08:26.499 17644.308 - 17745.132: 68.0217% ( 103) 00:08:26.499 17745.132 - 17845.957: 69.3856% ( 103) 00:08:26.499 17845.957 - 17946.782: 70.7760% ( 105) 00:08:26.499 17946.782 - 18047.606: 72.1531% ( 104) 00:08:26.499 18047.606 - 18148.431: 73.5567% ( 106) 00:08:26.499 18148.431 - 18249.255: 74.7352% ( 89) 00:08:26.499 18249.255 - 18350.080: 76.0593% ( 100) 00:08:26.499 18350.080 - 18450.905: 77.3967% ( 101) 00:08:26.499 18450.905 - 18551.729: 
78.7076% ( 99)
00:08:26.499 18551.729 - 18652.554: 79.8332% ( 85)
00:08:26.499 18652.554 - 18753.378: 81.0249% ( 90)
00:08:26.499 18753.378 - 18854.203: 82.1239% ( 83)
00:08:26.499 18854.203 - 18955.028: 83.0508% ( 70)
00:08:26.499 18955.028 - 19055.852: 84.0572% ( 76)
00:08:26.499 19055.852 - 19156.677: 84.9974% ( 71)
00:08:26.499 19156.677 - 19257.502: 85.9772% ( 74)
00:08:26.499 19257.502 - 19358.326: 86.9306% ( 72)
00:08:26.499 19358.326 - 19459.151: 87.8046% ( 66)
00:08:26.499 19459.151 - 19559.975: 88.5196% ( 54)
00:08:26.499 19559.975 - 19660.800: 89.2479% ( 55)
00:08:26.499 19660.800 - 19761.625: 90.0821% ( 63)
00:08:26.499 19761.625 - 19862.449: 90.9560% ( 66)
00:08:26.499 19862.449 - 19963.274: 91.7373% ( 59)
00:08:26.499 19963.274 - 20064.098: 92.3994% ( 50)
00:08:26.499 20064.098 - 20164.923: 93.1144% ( 54)
00:08:26.499 20164.923 - 20265.748: 93.7368% ( 47)
00:08:26.499 20265.748 - 20366.572: 94.3856% ( 49)
00:08:26.499 20366.572 - 20467.397: 94.9417% ( 42)
00:08:26.499 20467.397 - 20568.222: 95.4714% ( 40)
00:08:26.499 20568.222 - 20669.046: 95.9613% ( 37)
00:08:26.499 20669.046 - 20769.871: 96.4513% ( 37)
00:08:26.499 20769.871 - 20870.695: 96.8220% ( 28)
00:08:26.499 20870.695 - 20971.520: 97.1266% ( 23)
00:08:26.499 20971.520 - 21072.345: 97.3782% ( 19)
00:08:26.499 21072.345 - 21173.169: 97.5768% ( 15)
00:08:26.499 21173.169 - 21273.994: 97.7357% ( 12)
00:08:26.499 21273.994 - 21374.818: 97.8681% ( 10)
00:08:26.499 21374.818 - 21475.643: 97.9873% ( 9)
00:08:26.499 21475.643 - 21576.468: 98.0932% ( 8)
00:08:26.499 21576.468 - 21677.292: 98.1727% ( 6)
00:08:26.499 21677.292 - 21778.117: 98.2389% ( 5)
00:08:26.499 21778.117 - 21878.942: 98.3051% ( 5)
00:08:26.499 22988.012 - 23088.837: 98.3581% ( 4)
00:08:26.499 23088.837 - 23189.662: 98.4905% ( 10)
00:08:26.499 23189.662 - 23290.486: 98.5434% ( 4)
00:08:26.499 23290.486 - 23391.311: 98.5832% ( 3)
00:08:26.499 23391.311 - 23492.135: 98.6626% ( 6)
00:08:26.499 23492.135 - 23592.960: 98.7685% ( 8)
00:08:26.499 23592.960 - 23693.785: 98.8347% ( 5)
00:08:26.499 23693.785 - 23794.609: 98.9142% ( 6)
00:08:26.499 23794.609 - 23895.434: 98.9936% ( 6)
00:08:26.499 23895.434 - 23996.258: 99.0599% ( 5)
00:08:26.499 23996.258 - 24097.083: 99.1393% ( 6)
00:08:26.499 24097.083 - 24197.908: 99.1525% ( 1)
00:08:26.499 26819.348 - 27020.997: 99.2055% ( 4)
00:08:26.499 27020.997 - 27222.646: 99.3379% ( 10)
00:08:26.499 27222.646 - 27424.295: 99.4836% ( 11)
00:08:26.499 27424.295 - 27625.945: 99.6160% ( 10)
00:08:26.499 27625.945 - 27827.594: 99.7617% ( 11)
00:08:26.499 27827.594 - 28029.243: 99.9073% ( 11)
00:08:26.499 28029.243 - 28230.892: 100.0000% ( 7)
00:08:26.499
00:08:26.499 10:02:54 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:27.890 Initializing NVMe Controllers
00:08:27.890 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:27.890 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:27.890 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:27.890 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:27.890 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:27.890 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:27.890 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:27.890 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:27.890 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:27.890 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:27.890 Initialization complete. Launching workers.
00:08:27.890 ========================================================
00:08:27.890 Latency(us)
00:08:27.890 Device Information : IOPS MiB/s Average min max
00:08:27.890 PCIE (0000:00:11.0) NSID 1 from core 0: 15851.23 185.76 8078.44 5566.01 27257.89
00:08:27.890 PCIE (0000:00:13.0) NSID 1 from core 0: 15851.23 185.76 8072.82 5196.59 27542.59
00:08:27.890 PCIE (0000:00:10.0) NSID 1 from core 0: 15851.23 185.76 8065.22 4775.29 27284.07
00:08:27.890 PCIE (0000:00:12.0) NSID 1 from core 0: 15851.23 185.76 8058.17 4482.70 26385.73
00:08:27.890 PCIE (0000:00:12.0) NSID 2 from core 0: 15851.23 185.76 8051.50 3768.29 27077.25
00:08:27.890 PCIE (0000:00:12.0) NSID 3 from core 0: 15851.23 185.76 8044.90 3476.01 26369.21
00:08:27.890 ========================================================
00:08:27.890 Total : 95107.41 1114.54 8061.84 3476.01 27542.59
00:08:27.890
00:08:27.890 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:27.890 =================================================================================
00:08:27.890 1.00000% : 7007.311us
00:08:27.890 10.00000% : 7309.785us
00:08:27.890 25.00000% : 7461.022us
00:08:27.890 50.00000% : 7713.083us
00:08:27.890 75.00000% : 8065.969us
00:08:27.890 90.00000% : 9124.628us
00:08:27.890 95.00000% : 10233.698us
00:08:27.890 98.00000% : 12451.840us
00:08:27.890 99.00000% : 14417.920us
00:08:27.890 99.50000% : 17845.957us
00:08:27.890 99.90000% : 26819.348us
00:08:27.890 99.99000% : 27222.646us
00:08:27.890 99.99900% : 27424.295us
00:08:27.890 99.99990% : 27424.295us
00:08:27.890 99.99999% : 27424.295us
00:08:27.890
00:08:27.890 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:27.890 =================================================================================
00:08:27.890 1.00000% : 7007.311us
00:08:27.890 10.00000% : 7309.785us
00:08:27.890 25.00000% : 7511.434us
00:08:27.890 50.00000% : 7713.083us
00:08:27.890 75.00000% : 8065.969us
00:08:27.890 90.00000% : 9175.040us
00:08:27.890 95.00000% : 10183.286us
00:08:27.890 98.00000% : 12502.252us
00:08:27.890 99.00000% : 13208.025us
00:08:27.890 99.50000% : 18854.203us
00:08:27.890 99.90000% : 27222.646us
00:08:27.890 99.99000% : 27625.945us
00:08:27.890 99.99900% : 27625.945us
00:08:27.890 99.99990% : 27625.945us
00:08:27.890 99.99999% : 27625.945us
00:08:27.890
00:08:27.890 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:27.890 =================================================================================
00:08:27.890 1.00000% : 6906.486us
00:08:27.890 10.00000% : 7208.960us
00:08:27.890 25.00000% : 7410.609us
00:08:27.890 50.00000% : 7763.495us
00:08:27.890 75.00000% : 8166.794us
00:08:27.890 90.00000% : 9074.215us
00:08:27.890 95.00000% : 10183.286us
00:08:27.890 98.00000% : 12250.191us
00:08:27.890 99.00000% : 13208.025us
00:08:27.890 99.50000% : 19055.852us
00:08:27.890 99.90000% : 27020.997us
00:08:27.890 99.99000% : 27424.295us
00:08:27.890 99.99900% : 27424.295us
00:08:27.890 99.99990% : 27424.295us
00:08:27.890 99.99999% : 27424.295us
00:08:27.890
00:08:27.890 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:27.890 =================================================================================
00:08:27.890 1.00000% : 7007.311us
00:08:27.890 10.00000% : 7309.785us
00:08:27.890 25.00000% : 7461.022us
00:08:27.890 50.00000% : 7713.083us
00:08:27.890 75.00000% : 8065.969us
00:08:27.890 90.00000% : 9124.628us
00:08:27.890 95.00000% : 10233.698us
00:08:27.890 98.00000% : 11796.480us
00:08:27.890 99.00000% : 13208.025us
00:08:27.890 99.50000% : 19459.151us
00:08:27.890 99.90000% : 26214.400us
00:08:27.890 99.99000% : 26416.049us
00:08:27.890 99.99900% : 26416.049us
00:08:27.890 99.99990% : 26416.049us
00:08:27.890 99.99999% : 26416.049us
00:08:27.890
00:08:27.890 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:27.890 =================================================================================
00:08:27.890 1.00000% : 6906.486us
00:08:27.890 10.00000% : 7309.785us
00:08:27.890 25.00000% : 7461.022us
00:08:27.890 50.00000% : 7713.083us
00:08:27.890 75.00000% : 8065.969us
00:08:27.890 90.00000% : 9074.215us
00:08:27.890 95.00000% : 10284.111us
00:08:27.890 98.00000% : 11796.480us
00:08:27.890 99.00000% : 13308.849us
00:08:27.890 99.50000% : 19459.151us
00:08:27.890 99.90000% : 27020.997us
00:08:27.890 99.99000% : 27222.646us
00:08:27.890 99.99900% : 27222.646us
00:08:27.890 99.99990% : 27222.646us
00:08:27.890 99.99999% : 27222.646us
00:08:27.890
00:08:27.890 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:27.890 =================================================================================
00:08:27.890 1.00000% : 6906.486us
00:08:27.890 10.00000% : 7309.785us
00:08:27.890 25.00000% : 7461.022us
00:08:27.890 50.00000% : 7713.083us
00:08:27.890 75.00000% : 8065.969us
00:08:27.890 90.00000% : 9124.628us
00:08:27.890 95.00000% : 10183.286us
00:08:27.890 98.00000% : 11897.305us
00:08:27.890 99.00000% : 13308.849us
00:08:27.890 99.50000% : 19660.800us
00:08:27.890 99.90000% : 26416.049us
00:08:27.890 99.99000% : 26416.049us
00:08:27.890 99.99900% : 26416.049us
00:08:27.890 99.99990% : 26416.049us
00:08:27.890 99.99999% : 26416.049us
00:08:27.890
00:08:27.890 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:27.890 ==============================================================================
00:08:27.890 Range in us Cumulative IO count
00:08:27.890 5545.354 - 5570.560: 0.0063% ( 1)
00:08:27.890 5570.560 - 5595.766: 0.0504% ( 7)
00:08:27.890 5595.766 - 5620.972: 0.0882% ( 6)
00:08:27.890 5620.972 - 5646.178: 0.1134% ( 4)
00:08:27.890 5646.178 - 5671.385: 0.1953% ( 13)
00:08:27.890 5671.385 - 5696.591: 0.2268% ( 5)
00:08:27.890 5696.591 - 5721.797: 0.2331% ( 1)
00:08:27.890 5721.797 - 5747.003: 0.2457% ( 2)
00:08:27.890 5747.003 - 5772.209: 0.2583% ( 2)
00:08:27.890 5772.209 - 5797.415: 0.2709% ( 2)
00:08:27.890 5797.415 - 5822.622: 0.2898% ( 3)
00:08:27.890 5822.622 - 5847.828: 0.3150% ( 4)
00:08:27.890 5847.828 - 5873.034: 0.3339% ( 3)
00:08:27.890 5873.034 - 5898.240: 0.3528% ( 3)
00:08:27.890 5898.240 - 5923.446: 0.3843% ( 5)
00:08:27.890 5923.446 - 5948.652: 0.3969% ( 2)
00:08:27.890 5948.652 - 5973.858: 0.4032% ( 1)
00:08:27.890 6704.837 - 6755.249: 0.4158% ( 2)
00:08:27.890 6755.249 - 6805.662: 0.4221% ( 1)
00:08:27.890 6805.662 - 6856.074: 0.5040% ( 13)
00:08:27.890 6856.074 - 6906.486: 0.6237% ( 19)
00:08:27.890 6906.486 - 6956.898: 0.8443% ( 35)
00:08:27.890 6956.898 - 7007.311: 1.2034% ( 57)
00:08:27.890 7007.311 - 7057.723: 1.7011% ( 79)
00:08:27.890 7057.723 - 7108.135: 2.5643% ( 137)
00:08:27.890 7108.135 - 7158.548: 4.0197% ( 231)
00:08:27.890 7158.548 - 7208.960: 5.9917% ( 313)
00:08:27.890 7208.960 - 7259.372: 8.2661% ( 361)
00:08:27.891 7259.372 - 7309.785: 11.5423% ( 520)
00:08:27.891 7309.785 - 7360.197: 15.6943% ( 659)
00:08:27.891 7360.197 - 7410.609: 20.2117% ( 717)
00:08:27.891 7410.609 - 7461.022: 25.1071% ( 777)
00:08:27.891 7461.022 - 7511.434: 30.0844% ( 790)
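The "Device Information" table above is internally consistent and worth sanity-checking: with -o 12288 every I/O is 12288 bytes, so MiB/s = IOPS x 12288 / 2^20, and with -q 128 keeping 128 I/Os outstanding per namespace, Little's Law (latency = queue depth / throughput) predicts 128 / 15851.23 s, about 8075 us, right in the middle of the reported per-namespace averages. A short check in Python, assuming, as the numbers suggest, that -q applies per associated namespace:

# Sanity checks against the 'Device Information' table above.
IO_SIZE = 12288          # bytes per I/O (-o 12288)
QUEUE_DEPTH = 128        # outstanding I/Os per namespace (-q 128)

iops = 15851.23          # per-namespace IOPS from the table

# Throughput: 15851.23 * 12288 / 2**20 = 185.76 MiB/s, as reported.
print(round(iops * IO_SIZE / 2**20, 2))

# Little's Law: average latency = queue depth / throughput.
# 128 / 15851.23 s = 8075.1 us, close to the reported 8044.90-8078.44 us
# spread (the remainder is per-device variation, not a different formula).
print(round(QUEUE_DEPTH / iops * 1e6, 1))

# The 'Total' row combines the six namespaces: IOPS and MiB/s add up, and
# the average is the equal-weight mean because each namespace ran at the
# same IOPS.
averages = [8078.44, 8072.82, 8065.22, 8058.17, 8051.50, 8044.90]
print(round(6 * iops, 2))                       # ~95107.38 vs 95107.41
print(round(sum(averages) / len(averages), 2))  # 8061.84, as reported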
00:08:27.891 7511.434 - 7561.846: 35.5721% ( 871) 00:08:27.891 7561.846 - 7612.258: 41.1668% ( 888) 00:08:27.891 7612.258 - 7662.671: 46.3773% ( 827) 00:08:27.891 7662.671 - 7713.083: 51.2160% ( 768) 00:08:27.891 7713.083 - 7763.495: 55.9917% ( 758) 00:08:27.891 7763.495 - 7813.908: 60.7107% ( 749) 00:08:27.891 7813.908 - 7864.320: 64.7429% ( 640) 00:08:27.891 7864.320 - 7914.732: 68.4476% ( 588) 00:08:27.891 7914.732 - 7965.145: 71.8246% ( 536) 00:08:27.891 7965.145 - 8015.557: 74.4267% ( 413) 00:08:27.891 8015.557 - 8065.969: 76.7641% ( 371) 00:08:27.891 8065.969 - 8116.382: 78.4589% ( 269) 00:08:27.891 8116.382 - 8166.794: 79.7946% ( 212) 00:08:27.891 8166.794 - 8217.206: 80.8846% ( 173) 00:08:27.891 8217.206 - 8267.618: 81.7225% ( 133) 00:08:27.891 8267.618 - 8318.031: 82.6676% ( 150) 00:08:27.891 8318.031 - 8368.443: 83.3417% ( 107) 00:08:27.891 8368.443 - 8418.855: 83.8962% ( 88) 00:08:27.891 8418.855 - 8469.268: 84.4695% ( 91) 00:08:27.891 8469.268 - 8519.680: 85.2004% ( 116) 00:08:27.891 8519.680 - 8570.092: 86.0383% ( 133) 00:08:27.891 8570.092 - 8620.505: 86.4982% ( 73) 00:08:27.891 8620.505 - 8670.917: 86.8700% ( 59) 00:08:27.891 8670.917 - 8721.329: 87.2039% ( 53) 00:08:27.891 8721.329 - 8771.742: 87.6008% ( 63) 00:08:27.891 8771.742 - 8822.154: 88.2371% ( 101) 00:08:27.891 8822.154 - 8872.566: 88.6278% ( 62) 00:08:27.891 8872.566 - 8922.978: 88.8861% ( 41) 00:08:27.891 8922.978 - 8973.391: 89.1507% ( 42) 00:08:27.891 8973.391 - 9023.803: 89.5224% ( 59) 00:08:27.891 9023.803 - 9074.215: 89.8311% ( 49) 00:08:27.891 9074.215 - 9124.628: 90.1588% ( 52) 00:08:27.891 9124.628 - 9175.040: 90.4801% ( 51) 00:08:27.891 9175.040 - 9225.452: 90.8833% ( 64) 00:08:27.891 9225.452 - 9275.865: 91.2172% ( 53) 00:08:27.891 9275.865 - 9326.277: 91.4882% ( 43) 00:08:27.891 9326.277 - 9376.689: 91.7906% ( 48) 00:08:27.891 9376.689 - 9427.102: 92.0426% ( 40) 00:08:27.891 9427.102 - 9477.514: 92.2946% ( 40) 00:08:27.891 9477.514 - 9527.926: 92.5151% ( 35) 00:08:27.891 9527.926 - 9578.338: 92.8238% ( 49) 00:08:27.891 9578.338 - 9628.751: 93.0066% ( 29) 00:08:27.891 9628.751 - 9679.163: 93.2901% ( 45) 00:08:27.891 9679.163 - 9729.575: 93.4665% ( 28) 00:08:27.891 9729.575 - 9779.988: 93.6492% ( 29) 00:08:27.891 9779.988 - 9830.400: 93.7752% ( 20) 00:08:27.891 9830.400 - 9880.812: 93.9201% ( 23) 00:08:27.891 9880.812 - 9931.225: 94.1658% ( 39) 00:08:27.891 9931.225 - 9981.637: 94.3044% ( 22) 00:08:27.891 9981.637 - 10032.049: 94.4178% ( 18) 00:08:27.891 10032.049 - 10082.462: 94.5439% ( 20) 00:08:27.891 10082.462 - 10132.874: 94.7014% ( 25) 00:08:27.891 10132.874 - 10183.286: 94.8715% ( 27) 00:08:27.891 10183.286 - 10233.698: 95.0227% ( 24) 00:08:27.891 10233.698 - 10284.111: 95.1361% ( 18) 00:08:27.891 10284.111 - 10334.523: 95.2306% ( 15) 00:08:27.891 10334.523 - 10384.935: 95.3692% ( 22) 00:08:27.891 10384.935 - 10435.348: 95.4889% ( 19) 00:08:27.891 10435.348 - 10485.760: 95.6149% ( 20) 00:08:27.891 10485.760 - 10536.172: 95.7346% ( 19) 00:08:27.891 10536.172 - 10586.585: 95.8165% ( 13) 00:08:27.891 10586.585 - 10636.997: 95.9236% ( 17) 00:08:27.891 10636.997 - 10687.409: 96.0433% ( 19) 00:08:27.891 10687.409 - 10737.822: 96.1316% ( 14) 00:08:27.891 10737.822 - 10788.234: 96.1946% ( 10) 00:08:27.891 10788.234 - 10838.646: 96.3080% ( 18) 00:08:27.891 10838.646 - 10889.058: 96.4025% ( 15) 00:08:27.891 10889.058 - 10939.471: 96.4970% ( 15) 00:08:27.891 10939.471 - 10989.883: 96.6104% ( 18) 00:08:27.891 10989.883 - 11040.295: 96.7049% ( 15) 00:08:27.891 11040.295 - 11090.708: 96.7805% ( 12) 00:08:27.891 
11090.708 - 11141.120: 96.8750% ( 15) 00:08:27.891 11141.120 - 11191.532: 96.9191% ( 7) 00:08:27.891 11191.532 - 11241.945: 96.9758% ( 9) 00:08:27.891 11241.945 - 11292.357: 97.0199% ( 7) 00:08:27.891 11292.357 - 11342.769: 97.0703% ( 8) 00:08:27.891 11342.769 - 11393.182: 97.1207% ( 8) 00:08:27.891 11393.182 - 11443.594: 97.1774% ( 9) 00:08:27.891 11443.594 - 11494.006: 97.2593% ( 13) 00:08:27.891 11494.006 - 11544.418: 97.3349% ( 12) 00:08:27.891 11544.418 - 11594.831: 97.3979% ( 10) 00:08:27.891 11594.831 - 11645.243: 97.4672% ( 11) 00:08:27.891 11645.243 - 11695.655: 97.5050% ( 6) 00:08:27.891 11695.655 - 11746.068: 97.5743% ( 11) 00:08:27.891 11746.068 - 11796.480: 97.6941% ( 19) 00:08:27.891 11796.480 - 11846.892: 97.7382% ( 7) 00:08:27.891 11846.892 - 11897.305: 97.7886% ( 8) 00:08:27.891 11897.305 - 11947.717: 97.8453% ( 9) 00:08:27.891 11947.717 - 11998.129: 97.8894% ( 7) 00:08:27.891 11998.129 - 12048.542: 97.9272% ( 6) 00:08:27.891 12048.542 - 12098.954: 97.9650% ( 6) 00:08:27.891 12098.954 - 12149.366: 97.9776% ( 2) 00:08:27.891 12149.366 - 12199.778: 97.9839% ( 1) 00:08:27.891 12351.015 - 12401.428: 97.9965% ( 2) 00:08:27.891 12401.428 - 12451.840: 98.0280% ( 5) 00:08:27.891 12451.840 - 12502.252: 98.0532% ( 4) 00:08:27.891 12502.252 - 12552.665: 98.0784% ( 4) 00:08:27.891 12552.665 - 12603.077: 98.1351% ( 9) 00:08:27.891 12603.077 - 12653.489: 98.2674% ( 21) 00:08:27.891 12653.489 - 12703.902: 98.2863% ( 3) 00:08:27.891 12703.902 - 12754.314: 98.3115% ( 4) 00:08:27.891 12754.314 - 12804.726: 98.3682% ( 9) 00:08:27.891 12804.726 - 12855.138: 98.4060% ( 6) 00:08:27.891 12855.138 - 12905.551: 98.4501% ( 7) 00:08:27.891 12905.551 - 13006.375: 98.5131% ( 10) 00:08:27.891 13006.375 - 13107.200: 98.5572% ( 7) 00:08:27.891 13107.200 - 13208.025: 98.6013% ( 7) 00:08:27.891 13208.025 - 13308.849: 98.6391% ( 6) 00:08:27.891 13308.849 - 13409.674: 98.6832% ( 7) 00:08:27.891 13409.674 - 13510.498: 98.7210% ( 6) 00:08:27.891 13510.498 - 13611.323: 98.7651% ( 7) 00:08:27.891 13611.323 - 13712.148: 98.7903% ( 4) 00:08:27.891 14014.622 - 14115.446: 98.7966% ( 1) 00:08:27.891 14115.446 - 14216.271: 98.8155% ( 3) 00:08:27.891 14216.271 - 14317.095: 98.8785% ( 10) 00:08:27.891 14317.095 - 14417.920: 99.0864% ( 33) 00:08:27.891 14417.920 - 14518.745: 99.1431% ( 9) 00:08:27.891 14518.745 - 14619.569: 99.1746% ( 5) 00:08:27.891 14619.569 - 14720.394: 99.1935% ( 3) 00:08:27.891 16938.535 - 17039.360: 99.1998% ( 1) 00:08:27.891 17039.360 - 17140.185: 99.2818% ( 13) 00:08:27.891 17140.185 - 17241.009: 99.3448% ( 10) 00:08:27.891 17241.009 - 17341.834: 99.4078% ( 10) 00:08:27.891 17341.834 - 17442.658: 99.4519% ( 7) 00:08:27.891 17442.658 - 17543.483: 99.4645% ( 2) 00:08:27.891 17644.308 - 17745.132: 99.4960% ( 5) 00:08:27.891 17745.132 - 17845.957: 99.5275% ( 5) 00:08:27.891 17845.957 - 17946.782: 99.5590% ( 5) 00:08:27.891 17946.782 - 18047.606: 99.5905% ( 5) 00:08:27.891 18047.606 - 18148.431: 99.5968% ( 1) 00:08:27.891 25206.154 - 25306.978: 99.6031% ( 1) 00:08:27.891 25306.978 - 25407.803: 99.6094% ( 1) 00:08:27.891 25609.452 - 25710.277: 99.6220% ( 2) 00:08:27.892 25811.102 - 26012.751: 99.6472% ( 4) 00:08:27.892 26012.751 - 26214.400: 99.7543% ( 17) 00:08:27.892 26214.400 - 26416.049: 99.8299% ( 12) 00:08:27.892 26416.049 - 26617.698: 99.8677% ( 6) 00:08:27.892 26617.698 - 26819.348: 99.9181% ( 8) 00:08:27.892 26819.348 - 27020.997: 99.9622% ( 7) 00:08:27.892 27020.997 - 27222.646: 99.9937% ( 5) 00:08:27.892 27222.646 - 27424.295: 100.0000% ( 1) 00:08:27.892 00:08:27.892 Latency histogram for 
PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:27.892 ============================================================================== 00:08:27.892 Range in us Cumulative IO count 00:08:27.892 5192.468 - 5217.674: 0.0252% ( 4) 00:08:27.892 5217.674 - 5242.880: 0.0567% ( 5) 00:08:27.892 5242.880 - 5268.086: 0.0819% ( 4) 00:08:27.892 5268.086 - 5293.292: 0.1323% ( 8) 00:08:27.892 5293.292 - 5318.498: 0.2142% ( 13) 00:08:27.892 5318.498 - 5343.705: 0.2520% ( 6) 00:08:27.892 5343.705 - 5368.911: 0.2646% ( 2) 00:08:27.892 5368.911 - 5394.117: 0.2835% ( 3) 00:08:27.892 5394.117 - 5419.323: 0.2961% ( 2) 00:08:27.892 5419.323 - 5444.529: 0.3087% ( 2) 00:08:27.892 5444.529 - 5469.735: 0.3213% ( 2) 00:08:27.892 5469.735 - 5494.942: 0.3402% ( 3) 00:08:27.892 5494.942 - 5520.148: 0.3528% ( 2) 00:08:27.892 5520.148 - 5545.354: 0.3654% ( 2) 00:08:27.892 5545.354 - 5570.560: 0.3780% ( 2) 00:08:27.892 5570.560 - 5595.766: 0.3969% ( 3) 00:08:27.892 5595.766 - 5620.972: 0.4032% ( 1) 00:08:27.892 6704.837 - 6755.249: 0.4095% ( 1) 00:08:27.892 6805.662 - 6856.074: 0.4347% ( 4) 00:08:27.892 6856.074 - 6906.486: 0.5418% ( 17) 00:08:27.892 6906.486 - 6956.898: 0.7749% ( 37) 00:08:27.892 6956.898 - 7007.311: 1.1971% ( 67) 00:08:27.892 7007.311 - 7057.723: 1.8523% ( 104) 00:08:27.892 7057.723 - 7108.135: 2.8289% ( 155) 00:08:27.892 7108.135 - 7158.548: 4.3347% ( 239) 00:08:27.892 7158.548 - 7208.960: 6.3886% ( 326) 00:08:27.892 7208.960 - 7259.372: 9.2868% ( 460) 00:08:27.892 7259.372 - 7309.785: 12.4181% ( 497) 00:08:27.892 7309.785 - 7360.197: 15.8581% ( 546) 00:08:27.892 7360.197 - 7410.609: 19.8526% ( 634) 00:08:27.892 7410.609 - 7461.022: 24.5842% ( 751) 00:08:27.892 7461.022 - 7511.434: 30.3679% ( 918) 00:08:27.892 7511.434 - 7561.846: 35.7926% ( 861) 00:08:27.892 7561.846 - 7612.258: 41.5008% ( 906) 00:08:27.892 7612.258 - 7662.671: 46.5222% ( 797) 00:08:27.892 7662.671 - 7713.083: 51.6507% ( 814) 00:08:27.892 7713.083 - 7763.495: 56.2374% ( 728) 00:08:27.892 7763.495 - 7813.908: 60.5658% ( 687) 00:08:27.892 7813.908 - 7864.320: 64.3523% ( 601) 00:08:27.892 7864.320 - 7914.732: 67.5907% ( 514) 00:08:27.892 7914.732 - 7965.145: 70.8291% ( 514) 00:08:27.892 7965.145 - 8015.557: 73.4627% ( 418) 00:08:27.892 8015.557 - 8065.969: 75.9262% ( 391) 00:08:27.892 8065.969 - 8116.382: 77.6021% ( 266) 00:08:27.892 8116.382 - 8166.794: 79.1394% ( 244) 00:08:27.892 8166.794 - 8217.206: 80.7145% ( 250) 00:08:27.892 8217.206 - 8267.618: 82.0502% ( 212) 00:08:27.892 8267.618 - 8318.031: 83.2094% ( 184) 00:08:27.892 8318.031 - 8368.443: 84.1104% ( 143) 00:08:27.892 8368.443 - 8418.855: 85.1184% ( 160) 00:08:27.892 8418.855 - 8469.268: 85.7485% ( 100) 00:08:27.892 8469.268 - 8519.680: 86.1580% ( 65) 00:08:27.892 8519.680 - 8570.092: 86.5990% ( 70) 00:08:27.892 8570.092 - 8620.505: 87.0653% ( 74) 00:08:27.892 8620.505 - 8670.917: 87.3362% ( 43) 00:08:27.892 8670.917 - 8721.329: 87.5378% ( 32) 00:08:27.892 8721.329 - 8771.742: 87.7583% ( 35) 00:08:27.892 8771.742 - 8822.154: 88.1237% ( 58) 00:08:27.892 8822.154 - 8872.566: 88.3506% ( 36) 00:08:27.892 8872.566 - 8922.978: 88.7034% ( 56) 00:08:27.892 8922.978 - 8973.391: 89.0247% ( 51) 00:08:27.892 8973.391 - 9023.803: 89.3271% ( 48) 00:08:27.892 9023.803 - 9074.215: 89.6295% ( 48) 00:08:27.892 9074.215 - 9124.628: 89.8690% ( 38) 00:08:27.892 9124.628 - 9175.040: 90.1777% ( 49) 00:08:27.892 9175.040 - 9225.452: 90.4360% ( 41) 00:08:27.892 9225.452 - 9275.865: 90.7699% ( 53) 00:08:27.892 9275.865 - 9326.277: 91.0093% ( 38) 00:08:27.892 9326.277 - 9376.689: 91.2676% ( 41) 00:08:27.892 
9376.689 - 9427.102: 91.7213% ( 72) 00:08:27.892 9427.102 - 9477.514: 92.1686% ( 71) 00:08:27.892 9477.514 - 9527.926: 92.5907% ( 67) 00:08:27.892 9527.926 - 9578.338: 92.8490% ( 41) 00:08:27.892 9578.338 - 9628.751: 93.0696% ( 35) 00:08:27.892 9628.751 - 9679.163: 93.2397% ( 27) 00:08:27.892 9679.163 - 9729.575: 93.3657% ( 20) 00:08:27.892 9729.575 - 9779.988: 93.5547% ( 30) 00:08:27.892 9779.988 - 9830.400: 93.6870% ( 21) 00:08:27.892 9830.400 - 9880.812: 93.8256% ( 22) 00:08:27.892 9880.812 - 9931.225: 94.0146% ( 30) 00:08:27.892 9931.225 - 9981.637: 94.2729% ( 41) 00:08:27.892 9981.637 - 10032.049: 94.4556% ( 29) 00:08:27.892 10032.049 - 10082.462: 94.6573% ( 32) 00:08:27.892 10082.462 - 10132.874: 94.8778% ( 35) 00:08:27.892 10132.874 - 10183.286: 95.0038% ( 20) 00:08:27.892 10183.286 - 10233.698: 95.1046% ( 16) 00:08:27.892 10233.698 - 10284.111: 95.1802% ( 12) 00:08:27.892 10284.111 - 10334.523: 95.2432% ( 10) 00:08:27.892 10334.523 - 10384.935: 95.3566% ( 18) 00:08:27.892 10384.935 - 10435.348: 95.5078% ( 24) 00:08:27.892 10435.348 - 10485.760: 95.6653% ( 25) 00:08:27.892 10485.760 - 10536.172: 95.8795% ( 34) 00:08:27.892 10536.172 - 10586.585: 96.0055% ( 20) 00:08:27.892 10586.585 - 10636.997: 96.1253% ( 19) 00:08:27.892 10636.997 - 10687.409: 96.2135% ( 14) 00:08:27.892 10687.409 - 10737.822: 96.3206% ( 17) 00:08:27.892 10737.822 - 10788.234: 96.5285% ( 33) 00:08:27.892 10788.234 - 10838.646: 96.6167% ( 14) 00:08:27.892 10838.646 - 10889.058: 96.7049% ( 14) 00:08:27.892 10889.058 - 10939.471: 96.8498% ( 23) 00:08:27.892 10939.471 - 10989.883: 97.0010% ( 24) 00:08:27.892 10989.883 - 11040.295: 97.0955% ( 15) 00:08:27.892 11040.295 - 11090.708: 97.1900% ( 15) 00:08:27.892 11090.708 - 11141.120: 97.3034% ( 18) 00:08:27.892 11141.120 - 11191.532: 97.3727% ( 11) 00:08:27.892 11191.532 - 11241.945: 97.4231% ( 8) 00:08:27.892 11241.945 - 11292.357: 97.4672% ( 7) 00:08:27.892 11292.357 - 11342.769: 97.5113% ( 7) 00:08:27.892 11342.769 - 11393.182: 97.5554% ( 7) 00:08:27.892 11393.182 - 11443.594: 97.5743% ( 3) 00:08:27.892 11443.594 - 11494.006: 97.5806% ( 1) 00:08:27.892 12048.542 - 12098.954: 97.5869% ( 1) 00:08:27.892 12098.954 - 12149.366: 97.6184% ( 5) 00:08:27.892 12149.366 - 12199.778: 97.6562% ( 6) 00:08:27.892 12199.778 - 12250.191: 97.7004% ( 7) 00:08:27.892 12250.191 - 12300.603: 97.8453% ( 23) 00:08:27.892 12300.603 - 12351.015: 97.8831% ( 6) 00:08:27.892 12351.015 - 12401.428: 97.9209% ( 6) 00:08:27.892 12401.428 - 12451.840: 97.9398% ( 3) 00:08:27.892 12451.840 - 12502.252: 98.0028% ( 10) 00:08:27.892 12502.252 - 12552.665: 98.0658% ( 10) 00:08:27.892 12552.665 - 12603.077: 98.1477% ( 13) 00:08:27.892 12603.077 - 12653.489: 98.2170% ( 11) 00:08:27.892 12653.489 - 12703.902: 98.2800% ( 10) 00:08:27.892 12703.902 - 12754.314: 98.3556% ( 12) 00:08:27.892 12754.314 - 12804.726: 98.4753% ( 19) 00:08:27.892 12804.726 - 12855.138: 98.5320% ( 9) 00:08:27.892 12855.138 - 12905.551: 98.5824% ( 8) 00:08:27.892 12905.551 - 13006.375: 98.8344% ( 40) 00:08:27.892 13006.375 - 13107.200: 98.9289% ( 15) 00:08:27.892 13107.200 - 13208.025: 99.0045% ( 12) 00:08:27.892 13208.025 - 13308.849: 99.0675% ( 10) 00:08:27.892 13308.849 - 13409.674: 99.1368% ( 11) 00:08:27.892 13409.674 - 13510.498: 99.1620% ( 4) 00:08:27.892 13510.498 - 13611.323: 99.1809% ( 3) 00:08:27.892 13611.323 - 13712.148: 99.1935% ( 2) 00:08:27.892 17241.009 - 17341.834: 99.1998% ( 1) 00:08:27.892 17341.834 - 17442.658: 99.2188% ( 3) 00:08:27.892 17442.658 - 17543.483: 99.2440% ( 4) 00:08:27.892 17543.483 - 17644.308: 
99.2629% ( 3) 00:08:27.892 17644.308 - 17745.132: 99.2818% ( 3) 00:08:27.892 17745.132 - 17845.957: 99.3070% ( 4) 00:08:27.892 17845.957 - 17946.782: 99.3259% ( 3) 00:08:27.892 17946.782 - 18047.606: 99.3448% ( 3) 00:08:27.892 18047.606 - 18148.431: 99.3700% ( 4) 00:08:27.892 18148.431 - 18249.255: 99.3889% ( 3) 00:08:27.892 18249.255 - 18350.080: 99.4078% ( 3) 00:08:27.893 18350.080 - 18450.905: 99.4330% ( 4) 00:08:27.893 18450.905 - 18551.729: 99.4519% ( 3) 00:08:27.893 18551.729 - 18652.554: 99.4771% ( 4) 00:08:27.893 18652.554 - 18753.378: 99.4960% ( 3) 00:08:27.893 18753.378 - 18854.203: 99.5212% ( 4) 00:08:27.893 18854.203 - 18955.028: 99.5464% ( 4) 00:08:27.893 18955.028 - 19055.852: 99.5716% ( 4) 00:08:27.893 19055.852 - 19156.677: 99.5968% ( 4) 00:08:27.893 25811.102 - 26012.751: 99.6346% ( 6) 00:08:27.893 26012.751 - 26214.400: 99.6850% ( 8) 00:08:27.893 26214.400 - 26416.049: 99.7354% ( 8) 00:08:27.893 26416.049 - 26617.698: 99.7795% ( 7) 00:08:27.893 26617.698 - 26819.348: 99.8299% ( 8) 00:08:27.893 26819.348 - 27020.997: 99.8740% ( 7) 00:08:27.893 27020.997 - 27222.646: 99.9244% ( 8) 00:08:27.893 27222.646 - 27424.295: 99.9685% ( 7) 00:08:27.893 27424.295 - 27625.945: 100.0000% ( 5) 00:08:27.893 00:08:27.893 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:27.893 ============================================================================== 00:08:27.893 Range in us Cumulative IO count 00:08:27.893 4763.963 - 4789.169: 0.0378% ( 6) 00:08:27.893 4789.169 - 4814.375: 0.0756% ( 6) 00:08:27.893 4814.375 - 4839.582: 0.1134% ( 6) 00:08:27.893 4839.582 - 4864.788: 0.1449% ( 5) 00:08:27.893 4864.788 - 4889.994: 0.1701% ( 4) 00:08:27.893 4889.994 - 4915.200: 0.1827% ( 2) 00:08:27.893 4915.200 - 4940.406: 0.1890% ( 1) 00:08:27.893 4940.406 - 4965.612: 0.1953% ( 1) 00:08:27.893 4990.818 - 5016.025: 0.2079% ( 2) 00:08:27.893 5016.025 - 5041.231: 0.2268% ( 3) 00:08:27.893 5041.231 - 5066.437: 0.2646% ( 6) 00:08:27.893 5066.437 - 5091.643: 0.2898% ( 4) 00:08:27.893 5091.643 - 5116.849: 0.3024% ( 2) 00:08:27.893 5116.849 - 5142.055: 0.3150% ( 2) 00:08:27.893 5142.055 - 5167.262: 0.3276% ( 2) 00:08:27.893 5167.262 - 5192.468: 0.3339% ( 1) 00:08:27.893 5242.880 - 5268.086: 0.3402% ( 1) 00:08:27.893 5268.086 - 5293.292: 0.3591% ( 3) 00:08:27.893 5293.292 - 5318.498: 0.3717% ( 2) 00:08:27.893 5318.498 - 5343.705: 0.3843% ( 2) 00:08:27.893 5343.705 - 5368.911: 0.4032% ( 3) 00:08:27.893 6654.425 - 6704.837: 0.4221% ( 3) 00:08:27.893 6704.837 - 6755.249: 0.4473% ( 4) 00:08:27.893 6755.249 - 6805.662: 0.5103% ( 10) 00:08:27.893 6805.662 - 6856.074: 0.7876% ( 44) 00:08:27.893 6856.074 - 6906.486: 1.1782% ( 62) 00:08:27.893 6906.486 - 6956.898: 2.0791% ( 143) 00:08:27.893 6956.898 - 7007.311: 3.0809% ( 159) 00:08:27.893 7007.311 - 7057.723: 4.5111% ( 227) 00:08:27.893 7057.723 - 7108.135: 6.6595% ( 341) 00:08:27.893 7108.135 - 7158.548: 9.3246% ( 423) 00:08:27.893 7158.548 - 7208.960: 12.0149% ( 427) 00:08:27.893 7208.960 - 7259.372: 15.0517% ( 482) 00:08:27.893 7259.372 - 7309.785: 18.3720% ( 527) 00:08:27.893 7309.785 - 7360.197: 21.7742% ( 540) 00:08:27.893 7360.197 - 7410.609: 25.7308% ( 628) 00:08:27.893 7410.609 - 7461.022: 29.9521% ( 670) 00:08:27.893 7461.022 - 7511.434: 33.8458% ( 618) 00:08:27.893 7511.434 - 7561.846: 37.8150% ( 630) 00:08:27.893 7561.846 - 7612.258: 41.5449% ( 592) 00:08:27.893 7612.258 - 7662.671: 45.6590% ( 653) 00:08:27.893 7662.671 - 7713.083: 49.7480% ( 649) 00:08:27.893 7713.083 - 7763.495: 53.6857% ( 625) 00:08:27.893 7763.495 - 7813.908: 57.5605% ( 
615) 00:08:27.893 7813.908 - 7864.320: 61.0950% ( 561) 00:08:27.893 7864.320 - 7914.732: 64.5980% ( 556) 00:08:27.893 7914.732 - 7965.145: 67.3072% ( 430) 00:08:27.893 7965.145 - 8015.557: 69.7392% ( 386) 00:08:27.893 8015.557 - 8065.969: 71.9002% ( 343) 00:08:27.893 8065.969 - 8116.382: 73.9982% ( 333) 00:08:27.893 8116.382 - 8166.794: 75.9829% ( 315) 00:08:27.893 8166.794 - 8217.206: 77.7218% ( 276) 00:08:27.893 8217.206 - 8267.618: 79.1709% ( 230) 00:08:27.893 8267.618 - 8318.031: 80.6074% ( 228) 00:08:27.893 8318.031 - 8368.443: 81.8233% ( 193) 00:08:27.893 8368.443 - 8418.855: 82.9889% ( 185) 00:08:27.893 8418.855 - 8469.268: 84.1293% ( 181) 00:08:27.893 8469.268 - 8519.680: 85.0176% ( 141) 00:08:27.893 8519.680 - 8570.092: 85.7170% ( 111) 00:08:27.893 8570.092 - 8620.505: 86.2210% ( 80) 00:08:27.893 8620.505 - 8670.917: 86.6557% ( 69) 00:08:27.893 8670.917 - 8721.329: 87.1094% ( 72) 00:08:27.893 8721.329 - 8771.742: 87.6512% ( 86) 00:08:27.893 8771.742 - 8822.154: 88.1741% ( 83) 00:08:27.893 8822.154 - 8872.566: 88.5963% ( 67) 00:08:27.893 8872.566 - 8922.978: 88.9050% ( 49) 00:08:27.893 8922.978 - 8973.391: 89.3019% ( 63) 00:08:27.893 8973.391 - 9023.803: 89.6736% ( 59) 00:08:27.893 9023.803 - 9074.215: 90.0265% ( 56) 00:08:27.893 9074.215 - 9124.628: 90.2470% ( 35) 00:08:27.893 9124.628 - 9175.040: 90.4675% ( 35) 00:08:27.893 9175.040 - 9225.452: 90.6943% ( 36) 00:08:27.893 9225.452 - 9275.865: 90.9652% ( 43) 00:08:27.893 9275.865 - 9326.277: 91.3117% ( 55) 00:08:27.893 9326.277 - 9376.689: 91.6898% ( 60) 00:08:27.893 9376.689 - 9427.102: 92.0237% ( 53) 00:08:27.893 9427.102 - 9477.514: 92.2316% ( 33) 00:08:27.893 9477.514 - 9527.926: 92.4332% ( 32) 00:08:27.893 9527.926 - 9578.338: 92.6978% ( 42) 00:08:27.893 9578.338 - 9628.751: 92.9372% ( 38) 00:08:27.893 9628.751 - 9679.163: 93.1326% ( 31) 00:08:27.893 9679.163 - 9729.575: 93.2838% ( 24) 00:08:27.893 9729.575 - 9779.988: 93.4098% ( 20) 00:08:27.893 9779.988 - 9830.400: 93.5736% ( 26) 00:08:27.893 9830.400 - 9880.812: 93.7626% ( 30) 00:08:27.893 9880.812 - 9931.225: 93.9390% ( 28) 00:08:27.893 9931.225 - 9981.637: 94.1532% ( 34) 00:08:27.893 9981.637 - 10032.049: 94.3863% ( 37) 00:08:27.893 10032.049 - 10082.462: 94.6447% ( 41) 00:08:27.893 10082.462 - 10132.874: 94.8589% ( 34) 00:08:27.893 10132.874 - 10183.286: 95.0605% ( 32) 00:08:27.893 10183.286 - 10233.698: 95.2180% ( 25) 00:08:27.893 10233.698 - 10284.111: 95.3377% ( 19) 00:08:27.893 10284.111 - 10334.523: 95.4574% ( 19) 00:08:27.893 10334.523 - 10384.935: 95.5771% ( 19) 00:08:27.893 10384.935 - 10435.348: 95.6905% ( 18) 00:08:27.893 10435.348 - 10485.760: 95.8039% ( 18) 00:08:27.893 10485.760 - 10536.172: 95.8732% ( 11) 00:08:27.893 10536.172 - 10586.585: 95.9740% ( 16) 00:08:27.893 10586.585 - 10636.997: 96.0622% ( 14) 00:08:27.893 10636.997 - 10687.409: 96.1820% ( 19) 00:08:27.893 10687.409 - 10737.822: 96.3458% ( 26) 00:08:27.893 10737.822 - 10788.234: 96.4655% ( 19) 00:08:27.893 10788.234 - 10838.646: 96.5978% ( 21) 00:08:27.893 10838.646 - 10889.058: 96.6797% ( 13) 00:08:27.893 10889.058 - 10939.471: 96.7679% ( 14) 00:08:27.893 10939.471 - 10989.883: 96.8750% ( 17) 00:08:27.893 10989.883 - 11040.295: 96.9821% ( 17) 00:08:27.893 11040.295 - 11090.708: 97.0703% ( 14) 00:08:27.893 11090.708 - 11141.120: 97.1270% ( 9) 00:08:27.893 11141.120 - 11191.532: 97.2089% ( 13) 00:08:27.893 11191.532 - 11241.945: 97.2845% ( 12) 00:08:27.893 11241.945 - 11292.357: 97.3412% ( 9) 00:08:27.893 11292.357 - 11342.769: 97.4042% ( 10) 00:08:27.893 11342.769 - 11393.182: 97.4294% ( 4) 
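The "Summary latency data" blocks earlier in this run compress each histogram into fixed percentiles, which makes tail behaviour easy to compare across namespaces: the median write completes in roughly 7.7 ms everywhere, while the 99.90000% values sit between 26.2 and 27.2 ms, about 3.4x to 3.5x the median. A small comparison sketch, with the p50/p99.9 pairs copied from the summaries above:

# p50 vs p99.9 from the 'Summary latency data' blocks above (values in us).
summaries = {
    "PCIE (0000:00:11.0) NSID 1": (7713.083, 26819.348),
    "PCIE (0000:00:13.0) NSID 1": (7713.083, 27222.646),
    "PCIE (0000:00:10.0) NSID 1": (7763.495, 27020.997),
    "PCIE (0000:00:12.0) NSID 1": (7713.083, 26214.400),
    "PCIE (0000:00:12.0) NSID 2": (7713.083, 27020.997),
    "PCIE (0000:00:12.0) NSID 3": (7713.083, 26416.049),
}
for dev, (p50, p999) in summaries.items():
    # Tail amplification: how much slower the slowest 0.1% of writes are.
    print(f"{dev}: p50={p50:.0f}us p99.9={p999:.0f}us ({p999 / p50:.1f}x)")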
00:08:27.893 11393.182 - 11443.594: 97.4672% ( 6) 00:08:27.893 11443.594 - 11494.006: 97.4924% ( 4) 00:08:27.893 11494.006 - 11544.418: 97.5302% ( 6) 00:08:27.893 11594.831 - 11645.243: 97.5680% ( 6) 00:08:27.893 11645.243 - 11695.655: 97.6121% ( 7) 00:08:27.893 11695.655 - 11746.068: 97.6562% ( 7) 00:08:27.893 11746.068 - 11796.480: 97.7004% ( 7) 00:08:27.893 11796.480 - 11846.892: 97.7382% ( 6) 00:08:27.893 11846.892 - 11897.305: 97.7760% ( 6) 00:08:27.893 11897.305 - 11947.717: 97.8138% ( 6) 00:08:27.893 11947.717 - 11998.129: 97.8579% ( 7) 00:08:27.893 11998.129 - 12048.542: 97.9209% ( 10) 00:08:27.893 12048.542 - 12098.954: 97.9398% ( 3) 00:08:27.893 12098.954 - 12149.366: 97.9587% ( 3) 00:08:27.893 12149.366 - 12199.778: 97.9713% ( 2) 00:08:27.893 12199.778 - 12250.191: 98.0091% ( 6) 00:08:27.893 12250.191 - 12300.603: 98.0343% ( 4) 00:08:27.893 12300.603 - 12351.015: 98.0658% ( 5) 00:08:27.893 12351.015 - 12401.428: 98.1162% ( 8) 00:08:27.893 12401.428 - 12451.840: 98.1918% ( 12) 00:08:27.893 12451.840 - 12502.252: 98.2548% ( 10) 00:08:27.893 12502.252 - 12552.665: 98.3556% ( 16) 00:08:27.893 12552.665 - 12603.077: 98.4690% ( 18) 00:08:27.894 12603.077 - 12653.489: 98.5320% ( 10) 00:08:27.894 12653.489 - 12703.902: 98.6076% ( 12) 00:08:27.894 12703.902 - 12754.314: 98.6643% ( 9) 00:08:27.894 12754.314 - 12804.726: 98.6895% ( 4) 00:08:27.894 12804.726 - 12855.138: 98.7273% ( 6) 00:08:27.894 12855.138 - 12905.551: 98.7714% ( 7) 00:08:27.894 12905.551 - 13006.375: 98.8785% ( 17) 00:08:27.894 13006.375 - 13107.200: 98.9667% ( 14) 00:08:27.894 13107.200 - 13208.025: 99.0549% ( 14) 00:08:27.894 13208.025 - 13308.849: 99.1431% ( 14) 00:08:27.894 13308.849 - 13409.674: 99.1935% ( 8) 00:08:27.894 17644.308 - 17745.132: 99.2061% ( 2) 00:08:27.894 17745.132 - 17845.957: 99.2566% ( 8) 00:08:27.894 17845.957 - 17946.782: 99.2692% ( 2) 00:08:27.894 17946.782 - 18047.606: 99.3007% ( 5) 00:08:27.894 18047.606 - 18148.431: 99.3637% ( 10) 00:08:27.894 18148.431 - 18249.255: 99.3952% ( 5) 00:08:27.894 18249.255 - 18350.080: 99.4141% ( 3) 00:08:27.894 18350.080 - 18450.905: 99.4267% ( 2) 00:08:27.894 18450.905 - 18551.729: 99.4519% ( 4) 00:08:27.894 18551.729 - 18652.554: 99.4645% ( 2) 00:08:27.894 18753.378 - 18854.203: 99.4771% ( 2) 00:08:27.894 18854.203 - 18955.028: 99.4960% ( 3) 00:08:27.894 18955.028 - 19055.852: 99.5149% ( 3) 00:08:27.894 19055.852 - 19156.677: 99.5338% ( 3) 00:08:27.894 19156.677 - 19257.502: 99.5590% ( 4) 00:08:27.894 19257.502 - 19358.326: 99.5779% ( 3) 00:08:27.894 19358.326 - 19459.151: 99.5968% ( 3) 00:08:27.894 25811.102 - 26012.751: 99.6094% ( 2) 00:08:27.894 26012.751 - 26214.400: 99.6472% ( 6) 00:08:27.894 26214.400 - 26416.049: 99.6976% ( 8) 00:08:27.894 26416.049 - 26617.698: 99.7669% ( 11) 00:08:27.894 26617.698 - 26819.348: 99.8425% ( 12) 00:08:27.894 26819.348 - 27020.997: 99.9181% ( 12) 00:08:27.894 27020.997 - 27222.646: 99.9748% ( 9) 00:08:27.894 27222.646 - 27424.295: 100.0000% ( 4) 00:08:27.894 00:08:27.894 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:27.894 ============================================================================== 00:08:27.894 Range in us Cumulative IO count 00:08:27.894 4461.489 - 4486.695: 0.0063% ( 1) 00:08:27.894 4486.695 - 4511.902: 0.0189% ( 2) 00:08:27.894 4511.902 - 4537.108: 0.0315% ( 2) 00:08:27.894 4537.108 - 4562.314: 0.0504% ( 3) 00:08:27.894 4562.314 - 4587.520: 0.0630% ( 2) 00:08:27.894 4587.520 - 4612.726: 0.0882% ( 4) 00:08:27.894 4612.726 - 4637.932: 0.1260% ( 6) 00:08:27.894 4637.932 - 4663.138: 
00:08:27.894 [latency histogram rows elided: Range in us / Cumulative / IO count, buckets 4663.138 us through 26416.049 us, 0.1638% to 100.0000% cumulative]
00:08:27.895 
00:08:27.895 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:27.895 ==============================================================================
00:08:27.895        Range in us     Cumulative    IO count
00:08:27.895 [latency histogram rows elided: buckets 3755.717 us through 27222.646 us, 0.0189% to 100.0000% cumulative]
00:08:27.896 
00:08:27.896 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:27.896 ==============================================================================
00:08:27.896        Range in us     Cumulative    IO count
00:08:27.897 [latency histogram rows elided: buckets 3453.243 us through 26416.049 us, 0.0063% to 100.0000% cumulative]
00:08:27.897 
00:08:27.897 10:02:55 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:27.897 
00:08:27.897 real 0m2.416s
00:08:27.897 user 0m2.141s
00:08:27.897 sys 0m0.164s
00:08:27.897 10:02:55 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:27.897 10:02:55 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:27.898 ************************************
00:08:27.898 END TEST nvme_perf
00:08:27.898 ************************************
00:08:27.898 10:02:55 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:27.898 10:02:55 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:27.898 10:02:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:27.898 10:02:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.898 ************************************
00:08:27.898 START TEST nvme_hello_world
00:08:27.898 ************************************
00:08:27.898 10:02:55 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:27.898 Initializing NVMe Controllers
00:08:27.898 Attached to 0000:00:11.0
00:08:27.898 Namespace ID: 1 size: 5GB
00:08:27.898 Attached to 0000:00:13.0
00:08:27.898 Namespace ID: 1 size: 1GB
00:08:27.898 Attached to 0000:00:10.0
00:08:27.898 Namespace ID: 1 size: 6GB
00:08:27.898 Attached to 0000:00:12.0
00:08:27.898 Namespace ID: 1 size: 4GB
00:08:27.898 Namespace ID: 2 size: 4GB
00:08:27.898 Namespace ID: 3 size: 4GB
00:08:27.898 Initialization complete.
00:08:27.898 INFO: using host memory buffer for IO
00:08:27.898 Hello world!
00:08:27.898 INFO: using host memory buffer for IO
00:08:27.898 Hello world!
00:08:27.898 INFO: using host memory buffer for IO
00:08:27.898 Hello world!
00:08:27.898 INFO: using host memory buffer for IO
00:08:27.898 Hello world!
00:08:27.898 INFO: using host memory buffer for IO
00:08:27.898 Hello world!
00:08:27.898 INFO: using host memory buffer for IO
00:08:27.898 Hello world!
00:08:27.898 
00:08:27.898 real 0m0.179s
00:08:27.898 user 0m0.056s
00:08:27.898 sys 0m0.077s
00:08:27.898 10:02:56 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:27.898 10:02:56 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:27.898 ************************************
00:08:27.898 END TEST nvme_hello_world
00:08:27.898 ************************************
00:08:27.898 10:02:56 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:27.898 10:02:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:27.898 10:02:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:27.898 10:02:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.898 ************************************
00:08:27.898 START TEST nvme_sgl
00:08:27.898 ************************************
00:08:27.898 10:02:56 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:27.898 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:27.898 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:27.898 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:28.159 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:28.159 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:28.159 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:28.159 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:28.159 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:28.159 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:28.159 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:28.159 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:28.159 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:28.159 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:28.159 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:28.159 NVMe Readv/Writev Request test
00:08:28.159 Attached to 0000:00:11.0
00:08:28.159 Attached to 0000:00:13.0
00:08:28.159 Attached to 0000:00:10.0
00:08:28.159 Attached to 0000:00:12.0
00:08:28.159 0000:00:11.0: build_io_request_2 test passed
00:08:28.159 0000:00:11.0: build_io_request_4 test passed
00:08:28.159 0000:00:11.0: build_io_request_5 test passed
00:08:28.159 0000:00:11.0: build_io_request_6 test passed
00:08:28.159 0000:00:11.0: build_io_request_7 test passed
00:08:28.159 0000:00:11.0: build_io_request_10 test passed
00:08:28.159 0000:00:10.0: build_io_request_2 test passed
00:08:28.159 0000:00:10.0: build_io_request_4 test passed
00:08:28.159 0000:00:10.0: build_io_request_5 test passed
00:08:28.159 0000:00:10.0: build_io_request_6 test passed
00:08:28.159 0000:00:10.0: build_io_request_7 test passed
00:08:28.159 0000:00:10.0: build_io_request_10 test passed
00:08:28.159 Cleaning up...
00:08:28.159 
00:08:28.159 real 0m0.228s
00:08:28.159 user 0m0.104s
00:08:28.159 sys 0m0.076s
00:08:28.159 10:02:56 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:28.159 10:02:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:28.159 ************************************
00:08:28.159 END TEST nvme_sgl
00:08:28.159 ************************************
00:08:28.159 10:02:56 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:28.159 10:02:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:28.159 10:02:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:28.159 10:02:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:28.159 ************************************
00:08:28.159 START TEST nvme_e2edp
00:08:28.159 ************************************
00:08:28.159 10:02:56 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:28.159 NVMe Write/Read with End-to-End data protection test
00:08:28.159 Attached to 0000:00:11.0
00:08:28.159 Attached to 0000:00:13.0
00:08:28.159 Attached to 0000:00:10.0
00:08:28.159 Attached to 0000:00:12.0
00:08:28.159 Cleaning up...
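Note on the two tests above: nvme_sgl drives SPDK's scatter-gather entry points (the "Invalid IO length parameter" lines are expected negative cases), and nvme_dp exercises end-to-end data protection. A minimal sketch of the kind of protected write nvme_dp issues, assuming ns, qpair, and the data/metadata buffers are set up elsewhere; this is an illustration, not the test's own code:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;  /* completion flag polled by the caller */
    }

    /* Write one block with protection information checks enabled:
     * PRCHK_GUARD has the controller verify the CRC guard tag,
     * PRCHK_REFTAG the reference tag. */
    static int write_with_pi(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                             void *data, void *md, bool *done)
    {
        uint32_t io_flags = SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                            SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;

        return spdk_nvme_ns_cmd_write_with_md(ns, qpair, data, md,
                                              0 /* start LBA */, 1 /* LBA count */,
                                              io_done, done, io_flags,
                                              0 /* apptag mask */, 0 /* apptag */);
    }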
00:08:28.159 
00:08:28.159 real 0m0.155s
00:08:28.159 user 0m0.040s
00:08:28.159 sys 0m0.078s
00:08:28.159 10:02:56 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:28.159 10:02:56 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:28.159 ************************************
00:08:28.159 END TEST nvme_e2edp
00:08:28.159 ************************************
00:08:28.421 10:02:56 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:28.421 10:02:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:28.421 10:02:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:28.421 10:02:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:28.421 ************************************
00:08:28.421 START TEST nvme_reserve
00:08:28.421 ************************************
00:08:28.421 10:02:56 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:28.421 =====================================================
00:08:28.421 NVMe Controller at PCI bus 0, device 17, function 0
00:08:28.421 =====================================================
00:08:28.421 Reservations:                Not Supported
00:08:28.421 =====================================================
00:08:28.421 NVMe Controller at PCI bus 0, device 19, function 0
00:08:28.421 =====================================================
00:08:28.421 Reservations:                Not Supported
00:08:28.421 =====================================================
00:08:28.421 NVMe Controller at PCI bus 0, device 16, function 0
00:08:28.421 =====================================================
00:08:28.421 Reservations:                Not Supported
00:08:28.421 =====================================================
00:08:28.421 NVMe Controller at PCI bus 0, device 18, function 0
00:08:28.421 =====================================================
00:08:28.421 Reservations:                Not Supported
00:08:28.421 Reservation test passed
00:08:28.421 
00:08:28.421 real 0m0.158s
00:08:28.421 user 0m0.050s
00:08:28.421 sys 0m0.066s
00:08:28.421 10:02:56 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:28.421 10:02:56 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:28.421 ************************************
00:08:28.421 END TEST nvme_reserve
00:08:28.421 ************************************
00:08:28.421 10:02:56 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:28.421 10:02:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:28.421 10:02:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:28.421 10:02:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:28.421 ************************************
00:08:28.421 START TEST nvme_err_injection
00:08:28.421 ************************************
00:08:28.421 10:02:56 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:28.683 NVMe Error Injection test
00:08:28.683 Attached to 0000:00:11.0
00:08:28.683 Attached to 0000:00:13.0
00:08:28.683 Attached to 0000:00:10.0
00:08:28.683 Attached to 0000:00:12.0
00:08:28.683 0000:00:11.0: get features failed as expected
00:08:28.683 0000:00:13.0: get features failed as expected
00:08:28.683 0000:00:10.0: get features failed as expected
00:08:28.683 0000:00:12.0: get features failed as expected
00:08:28.683 0000:00:11.0: get features successfully as expected
00:08:28.683 0000:00:13.0: get features successfully as expected
00:08:28.683 0000:00:10.0: get features successfully as expected
00:08:28.683 0000:00:12.0: get features successfully as expected
00:08:28.683 0000:00:12.0: read failed as expected
00:08:28.683 0000:00:11.0: read failed as expected
00:08:28.683 0000:00:13.0: read failed as expected
00:08:28.683 0000:00:10.0: read failed as expected
00:08:28.683 0000:00:11.0: read successfully as expected
00:08:28.683 0000:00:13.0: read successfully as expected
00:08:28.683 0000:00:10.0: read successfully as expected
00:08:28.683 0000:00:12.0: read successfully as expected
00:08:28.683 Cleaning up...
00:08:28.683 
00:08:28.683 real 0m0.191s
00:08:28.683 user 0m0.062s
00:08:28.683 sys 0m0.080s
00:08:28.683 10:02:56 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:28.683 10:02:56 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:28.683 ************************************
00:08:28.683 END TEST nvme_err_injection
00:08:28.683 ************************************
00:08:28.683 10:02:56 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:28.683 10:02:56 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:28.683 10:02:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:28.683 10:02:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:28.683 ************************************
00:08:28.683 START TEST nvme_overhead
00:08:28.683 ************************************
00:08:28.683 10:02:56 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:30.072 Initializing NVMe Controllers
00:08:30.072 Attached to 0000:00:11.0
00:08:30.072 Attached to 0000:00:13.0
00:08:30.072 Attached to 0000:00:10.0
00:08:30.072 Attached to 0000:00:12.0
00:08:30.072 Initialization complete. Launching workers.
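The overhead tool invoked above (-o 4096 -t 1 -H) isolates per-I/O software cost: it timestamps each submission call and each completion poll separately, which is what the submit/complete histograms below report. A sketch of that measurement idea, assuming ns, qpair, buf, and an io_done callback already exist; this illustrates the technique rather than reproducing the tool's exact code:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    uint64_t hz = spdk_get_ticks_hz();  /* TSC ticks per second */

    /* Time only the submission path (queueing the command). */
    uint64_t t0 = spdk_get_ticks();
    int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* count */,
                                   io_done, NULL, 0 /* io_flags */);
    uint64_t submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

    /* Time only the completion poll. */
    t0 = spdk_get_ticks();
    spdk_nvme_qpair_process_completions(qpair, 0 /* process all available */);
    uint64_t complete_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;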
00:08:30.072 submit (in ns)   avg, min, max = 11241.1, 9798.5, 57050.8
00:08:30.072 complete (in ns) avg, min, max = 7584.4, 7231.5, 263426.9
00:08:30.072 
00:08:30.072 Submit histogram
00:08:30.072 ================
00:08:30.072        Range in us     Cumulative     Count
00:08:30.073 [submit histogram rows elided: buckets 9.797 us through 57.108 us, 0.0071% to 100.0000% cumulative]
00:08:30.073 
00:08:30.073 Complete histogram
00:08:30.073 ==================
00:08:30.073        Range in us     Cumulative     Count
00:08:30.074 [complete histogram rows elided: buckets 7.188 us through 264.665 us, 0.0143% to 100.0000% cumulative]
00:08:30.074 
00:08:30.074 
00:08:30.074 real 0m1.167s
00:08:30.074 user 0m1.047s
00:08:30.074 sys 0m0.078s
00:08:30.074 10:02:58 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:30.074 10:02:58 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:30.074 ************************************
00:08:30.074 END TEST nvme_overhead
00:08:30.074 ************************************
00:08:30.074 10:02:58 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:30.074 10:02:58 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:08:30.074 10:02:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:30.074 10:02:58 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:30.074 ************************************
00:08:30.074 START TEST nvme_arbitration
00:08:30.074 ************************************
00:08:30.074 10:02:58 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:33.379 Initializing NVMe Controllers
00:08:33.379 Attached to 0000:00:11.0
00:08:33.379 Attached to 0000:00:13.0
00:08:33.379 Attached to 0000:00:10.0
00:08:33.379 Attached to 0000:00:12.0
00:08:33.379 Associating QEMU NVMe Ctrl (12341 ) with lcore 0
00:08:33.379 Associating QEMU NVMe Ctrl (12343 ) with lcore 1
00:08:33.379 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:08:33.379 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:08:33.379 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:08:33.379 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:08:33.379 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:08:33.379 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:08:33.379 Initialization complete. Launching workers.
00:08:33.379 Starting thread on core 1 with urgent priority queue
00:08:33.379 Starting thread on core 2 with urgent priority queue
00:08:33.379 Starting thread on core 3 with urgent priority queue
00:08:33.379 Starting thread on core 0 with urgent priority queue
00:08:33.379 QEMU NVMe Ctrl (12341 ) core 0: 6720.00 IO/s 14.88 secs/100000 ios
00:08:33.379 QEMU NVMe Ctrl (12342 ) core 0: 6720.00 IO/s 14.88 secs/100000 ios
00:08:33.379 QEMU NVMe Ctrl (12343 ) core 1: 6656.00 IO/s 15.02 secs/100000 ios
00:08:33.379 QEMU NVMe Ctrl (12342 ) core 1: 6656.00 IO/s 15.02 secs/100000 ios
00:08:33.379 QEMU NVMe Ctrl (12340 ) core 2: 6359.00 IO/s 15.73 secs/100000 ios
00:08:33.379 QEMU NVMe Ctrl (12342 ) core 3: 6229.33 IO/s 16.05 secs/100000 ios
00:08:33.379 ========================================================
00:08:33.379 
00:08:33.379 
00:08:33.379 real 0m3.196s
00:08:33.379 user 0m9.021s
00:08:33.379 sys 0m0.093s
00:08:33.379 ************************************
00:08:33.379 END TEST nvme_arbitration
00:08:33.379 ************************************
00:08:33.379 10:03:01 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:33.379 10:03:01 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:08:33.379 10:03:01 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:33.379 10:03:01 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:33.379 10:03:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:33.379 10:03:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:33.379 ************************************
00:08:33.379 START TEST nvme_single_aen
00:08:33.379 ************************************
00:08:33.379 10:03:01 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:33.379 Asynchronous Event Request test
00:08:33.379 Attached to 0000:00:11.0
00:08:33.379 Attached to 0000:00:13.0
00:08:33.379 Attached to 0000:00:10.0
00:08:33.379 Attached to 0000:00:12.0
00:08:33.379 Reset controller to setup AER completions for this process
00:08:33.379 Registering asynchronous event callbacks...
00:08:33.379 Getting orig temperature thresholds of all controllers
00:08:33.379 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:33.379 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:33.379 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:33.379 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:33.379 Setting all controllers temperature threshold low to trigger AER
00:08:33.379 Waiting for all controllers temperature threshold to be set lower
00:08:33.379 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:33.379 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:33.380 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:33.380 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:33.380 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:33.380 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:33.380 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:33.380 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:33.380 Waiting for all controllers to trigger AER and reset threshold
00:08:33.380 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:33.380 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:33.380 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:33.380 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:33.380 Cleaning up...
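The aer test above registers an AER callback on each controller, then lowers the Temperature Threshold feature (FID 0x04) below the reported 323 K so the controller posts a SMART asynchronous event. A hedged sketch of those two steps against the public API; ctrlr is an attached controller, aer_cb is the registered handler, and the cdw11 encoding shown covers only the composite over-temperature case:

    #include "spdk/nvme.h"

    static void set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* threshold update acknowledged; an AEN should follow */
    }

    /* aer_cb fires once per posted asynchronous event. */
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    /* cdw11 bits 15:0 carry the threshold in Kelvin; 200 K is well under
     * the 323 K the controllers report, so the AEN triggers immediately. */
    spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                    200, 0, NULL, 0, set_feature_done, NULL);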
00:08:33.380 
00:08:33.380 real 0m0.175s
00:08:33.380 user 0m0.048s
00:08:33.380 sys 0m0.085s
00:08:33.380 10:03:01 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:33.380 ************************************
00:08:33.380 END TEST nvme_single_aen
00:08:33.380 ************************************
00:08:33.380 10:03:01 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:08:33.380 10:03:01 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:08:33.380 10:03:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:33.380 10:03:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:33.380 10:03:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:33.380 ************************************
00:08:33.380 START TEST nvme_doorbell_aers
00:08:33.380 ************************************
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:33.380 10:03:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:33.662 [2024-11-03 10:03:01.889347] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
00:08:43.681 Executing: test_write_invalid_db
00:08:43.681 Waiting for AER completion...
00:08:43.681 Failure: test_write_invalid_db
00:08:43.681 
00:08:43.681 Executing: test_invalid_db_write_overflow_sq
00:08:43.681 Waiting for AER completion...
00:08:43.681 Failure: test_invalid_db_write_overflow_sq
00:08:43.681 
00:08:43.681 Executing: test_invalid_db_write_overflow_cq
00:08:43.681 Waiting for AER completion...
00:08:43.681 Failure: test_invalid_db_write_overflow_cq
00:08:43.681 
00:08:43.681 10:03:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:43.681 10:03:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:43.681 [2024-11-03 10:03:11.923390] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
00:08:53.713 Executing: test_write_invalid_db
00:08:53.713 Waiting for AER completion...
00:08:53.713 Failure: test_write_invalid_db
00:08:53.713 
00:08:53.713 Executing: test_invalid_db_write_overflow_sq
00:08:53.713 Waiting for AER completion...
00:08:53.713 Failure: test_invalid_db_write_overflow_sq
00:08:53.713 
00:08:53.713 Executing: test_invalid_db_write_overflow_cq
00:08:53.713 Waiting for AER completion...
00:08:53.713 Failure: test_invalid_db_write_overflow_cq
00:08:53.713 
00:08:53.713 10:03:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:53.713 10:03:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:53.713 [2024-11-03 10:03:21.922899] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
00:09:03.834 Executing: test_write_invalid_db
00:09:03.834 Waiting for AER completion...
00:09:03.834 Failure: test_write_invalid_db
00:09:03.834 
00:09:03.834 Executing: test_invalid_db_write_overflow_sq
00:09:03.834 Waiting for AER completion...
00:09:03.834 Failure: test_invalid_db_write_overflow_sq
00:09:03.834 
00:09:03.834 Executing: test_invalid_db_write_overflow_cq
00:09:03.834 Waiting for AER completion...
00:09:03.834 Failure: test_invalid_db_write_overflow_cq
00:09:03.834 
00:09:03.834 10:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:09:03.834 10:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:09:03.834 [2024-11-03 10:03:31.965741] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
00:09:13.809 Executing: test_write_invalid_db
00:09:13.809 Waiting for AER completion...
00:09:13.809 Failure: test_write_invalid_db
00:09:13.809 
00:09:13.809 Executing: test_invalid_db_write_overflow_sq
00:09:13.809 Waiting for AER completion...
00:09:13.809 Failure: test_invalid_db_write_overflow_sq
00:09:13.809 
00:09:13.809 Executing: test_invalid_db_write_overflow_cq
00:09:13.809 Waiting for AER completion...
00:09:13.809 Failure: test_invalid_db_write_overflow_cq
00:09:13.809 
00:09:13.809 
00:09:13.809 real 0m40.181s
00:09:13.809 user 0m34.209s
00:09:13.809 sys 0m5.619s
00:09:13.809 10:03:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:13.809 10:03:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:09:13.809 ************************************
00:09:13.809 END TEST nvme_doorbell_aers
00:09:13.809 ************************************
00:09:13.809 10:03:41 nvme -- nvme/nvme.sh@97 -- # uname
00:09:13.809 10:03:41 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:09:13.809 10:03:41 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:09:13.809 10:03:41 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:09:13.809 10:03:41 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:13.809 10:03:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:13.809 ************************************
00:09:13.809 START TEST nvme_multi_aen
00:09:13.809 ************************************
00:09:13.809 10:03:41 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:09:13.809 [2024-11-03 10:03:42.010357] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.010739] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.010803] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.011931] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.012012] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.012044] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.013000] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.013066] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.013097] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.013994] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.014057] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
[2024-11-03 10:03:42.014087] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75102) is not found. Dropping the request.
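The "aer_cb for log page 2, aen_event_type: 0x01" lines in this test's output come from decoding the AEN completion: cdw0 packs the event type (0x01 = SMART/health), the event info (0x01 = temperature over threshold), and the log page to read (0x02 = Health Information). A sketch of that decode using the structures from spdk/nvme_spec.h, assuming the callback was registered as shown earlier:

    #include "spdk/nvme.h"

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        union spdk_nvme_async_event_completion event;

        if (spdk_nvme_cpl_is_error(cpl)) {
            return;  /* AER aborted, e.g. during a controller reset */
        }
        event.raw = cpl->cdw0;
        if (event.bits.async_event_type == SPDK_NVME_ASYNC_EVENT_TYPE_SMART &&
            event.bits.log_page_identifier == SPDK_NVME_LOG_HEALTH_INFORMATION) {
            /* read log page 2, then restore the original threshold,
             * which is what produces the "Resetting Temp Threshold"
             * lines in this log */
        }
    }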
00:09:13.809 Child process pid: 75622 00:09:13.809 [Child] Asynchronous Event Request test 00:09:13.809 [Child] Attached to 0000:00:11.0 00:09:13.809 [Child] Attached to 0000:00:13.0 00:09:13.809 [Child] Attached to 0000:00:10.0 00:09:13.809 [Child] Attached to 0000:00:12.0 00:09:13.809 [Child] Registering asynchronous event callbacks... 00:09:13.809 [Child] Getting orig temperature thresholds of all controllers 00:09:13.809 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:13.809 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:13.809 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:13.809 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:13.809 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:13.809 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:13.809 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:13.809 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:13.809 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:13.809 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.809 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.809 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.809 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:13.809 [Child] Cleaning up... 00:09:14.067 Asynchronous Event Request test 00:09:14.067 Attached to 0000:00:11.0 00:09:14.067 Attached to 0000:00:13.0 00:09:14.068 Attached to 0000:00:10.0 00:09:14.068 Attached to 0000:00:12.0 00:09:14.068 Reset controller to setup AER completions for this process 00:09:14.068 Registering asynchronous event callbacks... 
00:09:14.068 Getting orig temperature thresholds of all controllers 00:09:14.068 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.068 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.068 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.068 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.068 Setting all controllers temperature threshold low to trigger AER 00:09:14.068 Waiting for all controllers temperature threshold to be set lower 00:09:14.068 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.068 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:14.068 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.068 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:14.068 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.068 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:14.068 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.068 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:14.068 Waiting for all controllers to trigger AER and reset threshold 00:09:14.068 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.068 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.068 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.068 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.068 Cleaning up... 00:09:14.068 00:09:14.068 real 0m0.329s 00:09:14.068 user 0m0.100s 00:09:14.068 sys 0m0.140s 00:09:14.068 10:03:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.068 10:03:42 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:14.068 ************************************ 00:09:14.068 END TEST nvme_multi_aen 00:09:14.068 ************************************ 00:09:14.068 10:03:42 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:14.068 10:03:42 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:14.068 10:03:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.068 10:03:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.068 ************************************ 00:09:14.068 START TEST nvme_startup 00:09:14.068 ************************************ 00:09:14.068 10:03:42 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:14.068 Initializing NVMe Controllers 00:09:14.068 Attached to 0000:00:11.0 00:09:14.068 Attached to 0000:00:13.0 00:09:14.068 Attached to 0000:00:10.0 00:09:14.068 Attached to 0000:00:12.0 00:09:14.068 Initialization complete. 00:09:14.068 Time used:117083.961 (us). 
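A note on the nvme_multi_aen exchange above before the suite moves on: it exercises the standard NVMe temperature-threshold AER handshake. The test lowers each controller's Composite Temperature Threshold (Feature ID 0x04) below the reported 323 Kelvin composite temperature, the controller raises a SMART/health asynchronous event pointing at log page 0x02, and the aer_cb restores the 343 Kelvin default. For orientation only, the same round trip sketched with nvme-cli against a kernel-attached device; this is an assumption for illustration, since the run above goes through the SPDK API and never touches nvme-cli, and /dev/nvme0 is a hypothetical device node:

    nvme get-feature /dev/nvme0 -f 0x04               # read the current threshold (343 K here)
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140     # 320 K, below the 323 K composite temperature
    nvme get-log /dev/nvme0 --log-id=2 --log-len=512  # the SMART/health page the AEN points at
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x0157     # restore the 343 K default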
00:09:14.068 00:09:14.068 real 0m0.164s 00:09:14.068 user 0m0.049s 00:09:14.068 sys 0m0.071s 00:09:14.068 10:03:42 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.068 10:03:42 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:14.068 ************************************ 00:09:14.068 END TEST nvme_startup 00:09:14.068 ************************************ 00:09:14.327 10:03:42 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:14.327 10:03:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:14.327 10:03:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.327 10:03:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.327 ************************************ 00:09:14.327 START TEST nvme_multi_secondary 00:09:14.327 ************************************ 00:09:14.327 10:03:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:14.327 10:03:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75677 00:09:14.327 10:03:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75678 00:09:14.327 10:03:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:14.327 10:03:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:14.327 10:03:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:17.612 Initializing NVMe Controllers 00:09:17.612 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:17.612 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:17.612 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:17.612 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:17.612 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:17.612 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:17.612 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:17.612 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:17.612 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:17.612 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:17.612 Initialization complete. Launching workers. 
00:09:17.612 ======================================================== 00:09:17.612 Latency(us) 00:09:17.612 Device Information : IOPS MiB/s Average min max 00:09:17.612 PCIE (0000:00:11.0) NSID 1 from core 2: 2913.07 11.38 5492.06 1505.44 12919.04 00:09:17.612 PCIE (0000:00:13.0) NSID 1 from core 2: 2913.07 11.38 5492.00 1616.03 12943.06 00:09:17.612 PCIE (0000:00:10.0) NSID 1 from core 2: 2913.07 11.38 5491.36 1642.06 12777.08 00:09:17.612 PCIE (0000:00:12.0) NSID 1 from core 2: 2913.07 11.38 5492.93 1600.05 12254.48 00:09:17.612 PCIE (0000:00:12.0) NSID 2 from core 2: 2913.07 11.38 5492.71 1665.71 12629.77 00:09:17.612 PCIE (0000:00:12.0) NSID 3 from core 2: 2913.07 11.38 5493.40 1663.00 13445.01 00:09:17.612 ======================================================== 00:09:17.612 Total : 17478.41 68.28 5492.41 1505.44 13445.01 00:09:17.612 00:09:17.612 10:03:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75677 00:09:17.612 Initializing NVMe Controllers 00:09:17.612 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:17.612 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:17.612 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:17.612 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:17.612 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:17.612 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:17.612 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:17.612 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:17.612 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:17.612 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:17.612 Initialization complete. Launching workers. 00:09:17.612 ======================================================== 00:09:17.612 Latency(us) 00:09:17.612 Device Information : IOPS MiB/s Average min max 00:09:17.612 PCIE (0000:00:11.0) NSID 1 from core 1: 7475.15 29.20 2139.95 1178.66 6602.93 00:09:17.612 PCIE (0000:00:13.0) NSID 1 from core 1: 7475.15 29.20 2140.00 1231.68 6056.58 00:09:17.612 PCIE (0000:00:10.0) NSID 1 from core 1: 7475.15 29.20 2139.05 1234.14 5782.60 00:09:17.612 PCIE (0000:00:12.0) NSID 1 from core 1: 7475.15 29.20 2139.93 1280.37 5750.27 00:09:17.612 PCIE (0000:00:12.0) NSID 2 from core 1: 7475.15 29.20 2139.88 1121.01 5941.41 00:09:17.612 PCIE (0000:00:12.0) NSID 3 from core 1: 7475.15 29.20 2139.85 1016.00 5804.41 00:09:17.612 ======================================================== 00:09:17.612 Total : 44850.87 175.20 2139.78 1016.00 6602.93 00:09:17.612 00:09:19.517 Initializing NVMe Controllers 00:09:19.517 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.517 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.517 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.517 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.517 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:19.517 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:19.517 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:19.517 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:19.517 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:19.517 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:19.517 Initialization complete. Launching workers. 
00:09:19.517 ======================================================== 00:09:19.517 Latency(us) 00:09:19.517 Device Information : IOPS MiB/s Average min max 00:09:19.517 PCIE (0000:00:11.0) NSID 1 from core 0: 10967.97 42.84 1458.41 736.79 5489.24 00:09:19.517 PCIE (0000:00:13.0) NSID 1 from core 0: 10967.97 42.84 1458.40 740.45 5147.39 00:09:19.517 PCIE (0000:00:10.0) NSID 1 from core 0: 10967.97 42.84 1457.57 712.94 5450.00 00:09:19.517 PCIE (0000:00:12.0) NSID 1 from core 0: 10967.97 42.84 1458.37 673.32 5355.31 00:09:19.517 PCIE (0000:00:12.0) NSID 2 from core 0: 10967.97 42.84 1458.33 518.39 5230.56 00:09:19.517 PCIE (0000:00:12.0) NSID 3 from core 0: 10967.97 42.84 1458.32 443.56 5773.65 00:09:19.517 ======================================================== 00:09:19.517 Total : 65807.85 257.06 1458.23 443.56 5773.65 00:09:19.517 00:09:19.517 10:03:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75678 00:09:19.517 10:03:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75743 00:09:19.517 10:03:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:19.517 10:03:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75744 00:09:19.517 10:03:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:19.517 10:03:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:22.823 Initializing NVMe Controllers 00:09:22.823 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.823 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:22.823 Initialization complete. Launching workers. 
00:09:22.823 ======================================================== 00:09:22.823 Latency(us) 00:09:22.823 Device Information : IOPS MiB/s Average min max 00:09:22.823 PCIE (0000:00:11.0) NSID 1 from core 0: 7675.25 29.98 2084.19 771.79 5922.32 00:09:22.823 PCIE (0000:00:13.0) NSID 1 from core 0: 7675.25 29.98 2084.25 773.57 5670.12 00:09:22.823 PCIE (0000:00:10.0) NSID 1 from core 0: 7675.25 29.98 2083.41 746.68 5783.84 00:09:22.823 PCIE (0000:00:12.0) NSID 1 from core 0: 7675.25 29.98 2084.30 779.87 6057.93 00:09:22.823 PCIE (0000:00:12.0) NSID 2 from core 0: 7675.25 29.98 2084.30 773.32 5825.18 00:09:22.823 PCIE (0000:00:12.0) NSID 3 from core 0: 7675.25 29.98 2084.22 772.55 5509.50 00:09:22.823 ======================================================== 00:09:22.823 Total : 46051.51 179.89 2084.11 746.68 6057.93 00:09:22.823 00:09:22.823 Initializing NVMe Controllers 00:09:22.823 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.823 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:22.823 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:22.823 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:22.823 Initialization complete. Launching workers. 00:09:22.823 ======================================================== 00:09:22.823 Latency(us) 00:09:22.823 Device Information : IOPS MiB/s Average min max 00:09:22.823 PCIE (0000:00:11.0) NSID 1 from core 1: 7572.42 29.58 2112.45 784.81 5995.75 00:09:22.823 PCIE (0000:00:13.0) NSID 1 from core 1: 7572.42 29.58 2112.43 783.83 5819.76 00:09:22.823 PCIE (0000:00:10.0) NSID 1 from core 1: 7572.42 29.58 2111.43 699.50 6442.85 00:09:22.823 PCIE (0000:00:12.0) NSID 1 from core 1: 7572.42 29.58 2112.31 648.11 6118.82 00:09:22.823 PCIE (0000:00:12.0) NSID 2 from core 1: 7572.42 29.58 2112.23 478.22 6014.40 00:09:22.823 PCIE (0000:00:12.0) NSID 3 from core 1: 7572.42 29.58 2112.19 423.41 5802.14 00:09:22.823 ======================================================== 00:09:22.823 Total : 45434.53 177.48 2112.17 423.41 6442.85 00:09:22.823 00:09:24.726 Initializing NVMe Controllers 00:09:24.726 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:24.726 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:24.726 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:24.726 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:24.726 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:24.726 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:24.726 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:24.726 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:24.726 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:24.726 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:24.726 Initialization complete. Launching workers. 
00:09:24.726 ======================================================== 00:09:24.726 Latency(us) 00:09:24.726 Device Information : IOPS MiB/s Average min max 00:09:24.726 PCIE (0000:00:11.0) NSID 1 from core 2: 4542.93 17.75 3521.21 775.96 12339.25 00:09:24.726 PCIE (0000:00:13.0) NSID 1 from core 2: 4542.93 17.75 3521.55 783.31 12101.24 00:09:24.726 PCIE (0000:00:10.0) NSID 1 from core 2: 4542.93 17.75 3519.66 768.43 12237.09 00:09:24.726 PCIE (0000:00:12.0) NSID 1 from core 2: 4542.93 17.75 3521.26 722.63 12591.86 00:09:24.726 PCIE (0000:00:12.0) NSID 2 from core 2: 4542.93 17.75 3521.38 538.27 12236.15 00:09:24.726 PCIE (0000:00:12.0) NSID 3 from core 2: 4542.93 17.75 3521.33 453.04 12408.38 00:09:24.726 ======================================================== 00:09:24.726 Total : 27257.61 106.48 3521.07 453.04 12591.86 00:09:24.726 00:09:24.726 10:03:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75743 00:09:24.726 10:03:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75744 00:09:24.726 00:09:24.726 real 0m10.544s 00:09:24.726 user 0m18.224s 00:09:24.726 sys 0m0.487s 00:09:24.726 10:03:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.726 10:03:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:24.726 ************************************ 00:09:24.726 END TEST nvme_multi_secondary 00:09:24.726 ************************************ 00:09:24.726 10:03:53 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:24.726 10:03:53 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:24.726 10:03:53 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/74704 ]] 00:09:24.726 10:03:53 nvme -- common/autotest_common.sh@1090 -- # kill 74704 00:09:24.726 10:03:53 nvme -- common/autotest_common.sh@1091 -- # wait 74704 00:09:24.726 [2024-11-03 10:03:53.015640] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.015756] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.015789] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.015821] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.016646] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.016730] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.016759] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.016789] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.017631] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 
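Two notes at this boundary. First, nvme_multi_secondary (ended above) is SPDK multi-process mode under load: three spdk_nvme_perf instances share one shared-memory instance via -i 0 and drive the same controllers from disjoint core masks, with the 5-second run started first so it outlives the two 3-second runs. Condensed from the nvme.sh trace (paths and flags exactly as logged; the uniform backgrounding is a simplification of the script's pid0/pid1 bookkeeping), each round is:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # lcore 0, longest run
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # lcore 1
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # lcore 2
    wait                                               # the script actually waits per pid

The "from core N" latency tables above are those processes reporting independently. Second, the "owning process (pid 75621) is not found" flood here comes from kill_stub tearing down the long-lived stub process (pid 74704); as the log reads, admin requests still tagged with the pid of an already-exited test are dropped during that teardown, which is why the flood sits between END TEST and the next suite rather than inside a test.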
00:09:24.726 [2024-11-03 10:03:53.017708] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.017737] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.017773] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.018534] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.018604] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.018629] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.726 [2024-11-03 10:03:53.018657] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75621) is not found. Dropping the request. 00:09:24.986 10:03:53 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:24.986 10:03:53 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:24.986 10:03:53 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:24.986 10:03:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.986 10:03:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.986 10:03:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.986 ************************************ 00:09:24.986 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:24.986 ************************************ 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:24.986 * Looking for test storage... 
00:09:24.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.986 --rc genhtml_branch_coverage=1 00:09:24.986 --rc genhtml_function_coverage=1 00:09:24.986 --rc genhtml_legend=1 00:09:24.986 --rc geninfo_all_blocks=1 00:09:24.986 --rc geninfo_unexecuted_blocks=1 00:09:24.986 00:09:24.986 ' 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.986 --rc genhtml_branch_coverage=1 00:09:24.986 --rc genhtml_function_coverage=1 00:09:24.986 --rc genhtml_legend=1 00:09:24.986 --rc geninfo_all_blocks=1 00:09:24.986 --rc geninfo_unexecuted_blocks=1 00:09:24.986 00:09:24.986 ' 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.986 --rc genhtml_branch_coverage=1 00:09:24.986 --rc genhtml_function_coverage=1 00:09:24.986 --rc genhtml_legend=1 00:09:24.986 --rc geninfo_all_blocks=1 00:09:24.986 --rc geninfo_unexecuted_blocks=1 00:09:24.986 00:09:24.986 ' 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.986 --rc genhtml_branch_coverage=1 00:09:24.986 --rc genhtml_function_coverage=1 00:09:24.986 --rc genhtml_legend=1 00:09:24.986 --rc geninfo_all_blocks=1 00:09:24.986 --rc geninfo_unexecuted_blocks=1 00:09:24.986 00:09:24.986 ' 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:24.986 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:24.987 
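The scripts/common.sh walk near the top of this suite (lt 1.15 2) is the harness deciding whether the installed lcov predates version 2, so it can export the matching LCOV_OPTS shown above. The comparison it steps through is a plain dotted-component compare; condensed into a standalone sketch (same ".-:" splitting as the trace; use_legacy_opts is a hypothetical stand-in for exporting the legacy options):

    lt() {  # return 0 iff dotted version $1 < $2
        local -a v1 v2; local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # earlier component decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_opts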
10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75910 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75910 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 75910 ']' 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
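With spdk_tgt coming up, the round trip this test is about to drive (every call below appears verbatim in the rpc_cmd trace that follows) is: attach the controller as bdev nvme0, arm a one-shot error injection that intercepts admin opcode 0x0a (Get Features) and holds it unsubmitted for up to err_injection_timeout, fire exactly that command asynchronously, then reset the controller and verify the stuck command completes with the injected status (SCT 0x0, SC 0x1) within test_timeout seconds:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Fire the doomed admin command; cmd_b64 stands for the base64 dword buffer
    # in the trace below (opc 0x0a, cdw10 0x7 = Number of Queues feature).
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
    "$rpc" bdev_nvme_reset_controller nvme0   # the reset unsticks the held command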
00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.987 10:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.246 [2024-11-03 10:03:53.374256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:25.246 [2024-11-03 10:03:53.374371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75910 ] 00:09:25.246 [2024-11-03 10:03:53.518042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.246 [2024-11-03 10:03:53.552781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.246 [2024-11-03 10:03:53.552982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.246 [2024-11-03 10:03:53.553299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.246 [2024-11-03 10:03:53.553327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.813 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.813 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:25.813 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:25.813 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.813 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:26.072 nvme0n1 00:09:26.072 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_mQWRI.txt 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:26.073 true 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730628234 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75927 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:26.073 10:03:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 [2024-11-03 10:03:56.263197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:27.975 [2024-11-03 10:03:56.263614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:27.975 [2024-11-03 10:03:56.263647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:27.975 [2024-11-03 10:03:56.263661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:27.975 [2024-11-03 10:03:56.265051] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.975 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75927 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75927 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75927 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:27.975 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_mQWRI.txt 00:09:27.976 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:27.976 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:27.976 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:27.976 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- 
# printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_mQWRI.txt 00:09:28.234 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75910 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 75910 ']' 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 75910 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75910 00:09:28.235 killing process with pid 75910 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75910' 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 75910 00:09:28.235 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 75910 00:09:28.494 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:28.494 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:28.494 00:09:28.494 real 0m3.493s 00:09:28.494 user 0m12.496s 00:09:28.494 sys 0m0.446s 00:09:28.494 10:03:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.494 10:03:56 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:28.494 ************************************ 00:09:28.494 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:28.494 ************************************ 00:09:28.494 10:03:56 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:28.494 10:03:56 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:28.494 10:03:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.494 10:03:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.494 10:03:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.494 ************************************ 00:09:28.494 START TEST nvme_fio 00:09:28.494 ************************************ 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:28.494 10:03:56 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:28.494 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:28.753 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:28.753 10:03:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:28.753 10:03:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:28.753 10:03:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:28.753 10:03:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:29.012 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:29.012 fio-3.35 00:09:29.012 Starting 1 thread 00:09:34.280 00:09:34.280 test: (groupid=0, jobs=1): err= 0: pid=76056: Sun Nov 3 10:04:02 2024 00:09:34.280 read: IOPS=20.6k, BW=80.5MiB/s (84.4MB/s)(161MiB/2001msec) 00:09:34.280 slat (nsec): min=3343, max=89761, avg=5140.04, stdev=2471.29 00:09:34.280 clat (usec): min=218, max=13829, avg=3093.49, stdev=1100.45 00:09:34.280 lat (usec): min=222, max=13900, avg=3098.63, stdev=1101.65 00:09:34.280 clat percentiles (usec): 00:09:34.280 | 1.00th=[ 1827], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:34.280 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2835], 00:09:34.280 | 70.00th=[ 3064], 80.00th=[ 3589], 90.00th=[ 4752], 95.00th=[ 5538], 00:09:34.280 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 9110], 99.95th=[11076], 00:09:34.280 | 99.99th=[13304] 00:09:34.280 bw ( KiB/s): min=72672, max=86088, per=97.29%, avg=80160.00, stdev=6842.69, samples=3 00:09:34.280 iops : min=18168, max=21522, avg=20040.00, stdev=1710.67, samples=3 00:09:34.280 write: IOPS=20.5k, BW=80.2MiB/s (84.1MB/s)(160MiB/2001msec); 0 zone resets 00:09:34.280 slat (usec): min=3, max=104, avg= 5.39, stdev= 2.67 00:09:34.280 clat (usec): min=226, max=13695, avg=3109.26, stdev=1106.95 00:09:34.280 lat (usec): min=230, max=13709, avg=3114.65, stdev=1108.17 00:09:34.280 clat percentiles (usec): 00:09:34.280 | 1.00th=[ 1844], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:34.280 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:09:34.280 | 70.00th=[ 3097], 80.00th=[ 3621], 90.00th=[ 4817], 95.00th=[ 5538], 00:09:34.280 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 9372], 99.95th=[11207], 00:09:34.280 | 99.99th=[13173] 00:09:34.280 bw ( KiB/s): min=72752, max=86368, per=97.62%, avg=80181.33, stdev=6892.53, samples=3 00:09:34.280 iops : min=18188, max=21592, avg=20045.33, stdev=1723.13, samples=3 00:09:34.280 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:09:34.280 lat (msec) : 2=1.68%, 4=81.83%, 10=16.34%, 20=0.08% 00:09:34.280 cpu : usr=98.90%, sys=0.20%, ctx=16, majf=0, minf=625 00:09:34.280 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.280 issued rwts: total=41215,41087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.280 00:09:34.280 Run status group 0 (all jobs): 00:09:34.280 READ: bw=80.5MiB/s (84.4MB/s), 80.5MiB/s-80.5MiB/s (84.4MB/s-84.4MB/s), io=161MiB (169MB), run=2001-2001msec 00:09:34.280 WRITE: bw=80.2MiB/s (84.1MB/s), 80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:34.280 ----------------------------------------------------- 00:09:34.280 Suppressions used: 00:09:34.280 count bytes template 00:09:34.280 1 32 /usr/src/fio/parse.c 00:09:34.280 1 8 libtcmalloc_minimal.so 00:09:34.280 ----------------------------------------------------- 00:09:34.280 00:09:34.280 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:34.280 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:34.280 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:34.280 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:34.538 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:34.538 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:34.797 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:34.797 10:04:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:34.797 10:04:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:34.797 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:34.797 fio-3.35 00:09:34.797 Starting 1 thread 00:09:41.388 00:09:41.388 test: (groupid=0, jobs=1): err= 0: pid=76111: Sun Nov 3 10:04:09 2024 00:09:41.388 read: IOPS=18.8k, BW=73.3MiB/s (76.9MB/s)(147MiB/2001msec) 00:09:41.388 slat (nsec): min=4212, max=82747, avg=5520.31, stdev=3019.63 00:09:41.388 clat (usec): min=838, max=11459, avg=3390.23, stdev=1184.70 00:09:41.388 lat (usec): min=842, max=11542, avg=3395.75, stdev=1186.08 00:09:41.388 clat percentiles (usec): 00:09:41.388 | 1.00th=[ 2073], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:41.388 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3163], 00:09:41.388 | 70.00th=[ 3523], 80.00th=[ 4293], 90.00th=[ 5211], 95.00th=[ 5932], 00:09:41.388 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 8979], 99.95th=[10159], 00:09:41.388 | 99.99th=[11338] 00:09:41.388 bw ( KiB/s): min=69720, max=78824, per=100.00%, avg=75077.33, stdev=4760.92, samples=3 00:09:41.388 iops : min=17430, max=19706, avg=18769.33, stdev=1190.23, samples=3 00:09:41.388 write: IOPS=18.8k, BW=73.3MiB/s (76.9MB/s)(147MiB/2001msec); 0 zone resets 00:09:41.388 slat (nsec): min=4283, max=74294, avg=5729.15, stdev=2961.14 00:09:41.388 clat (usec): min=830, max=11386, avg=3407.71, stdev=1190.72 00:09:41.388 lat (usec): min=834, max=11401, avg=3413.43, stdev=1192.08 00:09:41.388 clat percentiles (usec): 00:09:41.388 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:41.388 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3163], 00:09:41.388 | 70.00th=[ 3556], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 5932], 00:09:41.388 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[ 9110], 99.95th=[10290], 00:09:41.388 | 99.99th=[11338] 00:09:41.388 bw ( KiB/s): min=70064, max=78632, per=99.97%, avg=75077.33, stdev=4466.37, samples=3 00:09:41.388 iops : min=17516, max=19658, avg=18769.33, stdev=1116.59, samples=3 00:09:41.388 lat (usec) : 1000=0.03% 00:09:41.388 lat (msec) : 2=0.64%, 4=75.76%, 10=23.52%, 20=0.06% 00:09:41.388 cpu : usr=99.00%, sys=0.05%, ctx=2, majf=0, minf=626 00:09:41.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:41.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.388 issued rwts: total=37556,37569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.388 00:09:41.388 Run status group 0 (all jobs): 00:09:41.388 READ: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:41.388 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:41.388 ----------------------------------------------------- 00:09:41.388 Suppressions used: 00:09:41.388 count bytes template 00:09:41.388 1 32 /usr/src/fio/parse.c 00:09:41.388 1 8 libtcmalloc_minimal.so 00:09:41.388 ----------------------------------------------------- 00:09:41.388 00:09:41.388 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:09:41.388 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:41.388 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:41.388 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:41.388 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:41.388 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:41.646 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:41.646 10:04:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:41.646 10:04:09 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:41.646 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:41.646 fio-3.35 00:09:41.646 Starting 1 thread 00:09:48.206 00:09:48.206 test: (groupid=0, jobs=1): err= 0: pid=76166: Sun Nov 3 10:04:16 2024 00:09:48.206 read: IOPS=17.2k, BW=67.1MiB/s (70.4MB/s)(134MiB/2001msec) 00:09:48.206 slat (nsec): min=4196, max=71465, avg=5689.29, stdev=3163.41 00:09:48.206 clat (usec): min=362, max=16516, avg=3698.56, stdev=1330.20 00:09:48.206 lat (usec): min=373, max=16576, avg=3704.25, stdev=1331.47 00:09:48.206 clat percentiles (usec): 00:09:48.206 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:48.206 | 30.00th=[ 2737], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 
3687], 00:09:48.206 | 70.00th=[ 4293], 80.00th=[ 4883], 90.00th=[ 5604], 95.00th=[ 6128], 00:09:48.206 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[13304], 99.95th=[15008], 00:09:48.206 | 99.99th=[16450] 00:09:48.206 bw ( KiB/s): min=66616, max=69128, per=98.62%, avg=67808.00, stdev=1260.88, samples=3 00:09:48.206 iops : min=16654, max=17282, avg=16952.00, stdev=315.22, samples=3 00:09:48.206 write: IOPS=17.2k, BW=67.2MiB/s (70.5MB/s)(135MiB/2001msec); 0 zone resets 00:09:48.206 slat (usec): min=4, max=331, avg= 5.92, stdev= 3.63 00:09:48.206 clat (usec): min=321, max=16434, avg=3720.23, stdev=1349.97 00:09:48.206 lat (usec): min=333, max=16447, avg=3726.15, stdev=1351.26 00:09:48.206 clat percentiles (usec): 00:09:48.206 | 1.00th=[ 2089], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:48.206 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 3720], 00:09:48.206 | 70.00th=[ 4359], 80.00th=[ 4948], 90.00th=[ 5604], 95.00th=[ 6128], 00:09:48.206 | 99.00th=[ 7111], 99.50th=[ 7635], 99.90th=[13698], 99.95th=[15139], 00:09:48.206 | 99.99th=[16319] 00:09:48.206 bw ( KiB/s): min=66248, max=68944, per=98.30%, avg=67674.67, stdev=1354.87, samples=3 00:09:48.206 iops : min=16562, max=17236, avg=16918.67, stdev=338.72, samples=3 00:09:48.206 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:48.206 lat (msec) : 2=0.57%, 4=64.14%, 10=35.09%, 20=0.19% 00:09:48.206 cpu : usr=98.75%, sys=0.15%, ctx=6, majf=0, minf=625 00:09:48.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:48.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.206 issued rwts: total=34394,34438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.206 00:09:48.206 Run status group 0 (all jobs): 00:09:48.206 READ: bw=67.1MiB/s (70.4MB/s), 67.1MiB/s-67.1MiB/s (70.4MB/s-70.4MB/s), io=134MiB (141MB), run=2001-2001msec 00:09:48.206 WRITE: bw=67.2MiB/s (70.5MB/s), 67.2MiB/s-67.2MiB/s (70.5MB/s-70.5MB/s), io=135MiB (141MB), run=2001-2001msec 00:09:48.206 ----------------------------------------------------- 00:09:48.206 Suppressions used: 00:09:48.206 count bytes template 00:09:48.206 1 32 /usr/src/fio/parse.c 00:09:48.206 1 8 libtcmalloc_minimal.so 00:09:48.206 ----------------------------------------------------- 00:09:48.206 00:09:48.206 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:48.206 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:48.206 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:48.206 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:48.206 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:48.206 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:48.465 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:48.465 10:04:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:48.465 10:04:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:48.724 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:48.724 fio-3.35 00:09:48.724 Starting 1 thread 00:09:53.990 00:09:53.990 test: (groupid=0, jobs=1): err= 0: pid=76221: Sun Nov 3 10:04:21 2024 00:09:53.990 read: IOPS=18.2k, BW=71.2MiB/s (74.6MB/s)(142MiB/2001msec) 00:09:53.990 slat (nsec): min=4226, max=72169, avg=5588.42, stdev=3094.00 00:09:53.990 clat (usec): min=576, max=10781, avg=3496.33, stdev=1255.10 00:09:53.990 lat (usec): min=581, max=10826, avg=3501.92, stdev=1256.35 00:09:53.990 clat percentiles (usec): 00:09:53.990 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2507], 00:09:53.990 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 3392], 00:09:53.990 | 70.00th=[ 3982], 80.00th=[ 4621], 90.00th=[ 5342], 95.00th=[ 5932], 00:09:53.990 | 99.00th=[ 7177], 99.50th=[ 7832], 99.90th=[ 9241], 99.95th=[10028], 00:09:53.990 | 99.99th=[10683] 00:09:53.990 bw ( KiB/s): min=66992, max=79712, per=100.00%, avg=74746.67, stdev=6803.30, samples=3 00:09:53.990 iops : min=16748, max=19928, avg=18686.67, stdev=1700.82, samples=3 00:09:53.990 write: IOPS=18.2k, BW=71.2MiB/s (74.7MB/s)(143MiB/2001msec); 0 zone resets 00:09:53.990 slat (nsec): min=4286, max=87387, avg=5754.14, stdev=3172.76 00:09:53.990 clat (usec): min=626, max=10709, avg=3503.47, stdev=1260.48 00:09:53.990 lat (usec): min=632, max=10723, avg=3509.22, stdev=1261.75 00:09:53.990 clat percentiles (usec): 00:09:53.990 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:53.990 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 3359], 00:09:53.990 | 70.00th=[ 3982], 80.00th=[ 4621], 90.00th=[ 5342], 95.00th=[ 5997], 00:09:53.990 | 99.00th=[ 7242], 99.50th=[ 7898], 99.90th=[ 
9503], 99.95th=[10028], 00:09:53.990 | 99.99th=[10552] 00:09:53.990 bw ( KiB/s): min=67176, max=79640, per=100.00%, avg=74786.67, stdev=6673.83, samples=3 00:09:53.990 iops : min=16794, max=19910, avg=18696.67, stdev=1668.46, samples=3 00:09:53.990 lat (usec) : 750=0.02%, 1000=0.01% 00:09:53.990 lat (msec) : 2=1.04%, 4=69.24%, 10=29.64%, 20=0.05% 00:09:53.990 cpu : usr=98.75%, sys=0.20%, ctx=6, majf=0, minf=624 00:09:53.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:53.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.990 issued rwts: total=36466,36496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.990 00:09:53.990 Run status group 0 (all jobs): 00:09:53.990 READ: bw=71.2MiB/s (74.6MB/s), 71.2MiB/s-71.2MiB/s (74.6MB/s-74.6MB/s), io=142MiB (149MB), run=2001-2001msec 00:09:53.990 WRITE: bw=71.2MiB/s (74.7MB/s), 71.2MiB/s-71.2MiB/s (74.7MB/s-74.7MB/s), io=143MiB (149MB), run=2001-2001msec 00:09:53.990 ----------------------------------------------------- 00:09:53.990 Suppressions used: 00:09:53.990 count bytes template 00:09:53.990 1 32 /usr/src/fio/parse.c 00:09:53.990 1 8 libtcmalloc_minimal.so 00:09:53.991 ----------------------------------------------------- 00:09:53.991 00:09:53.991 10:04:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:53.991 10:04:22 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:53.991 00:09:53.991 real 0m25.472s 00:09:53.991 user 0m17.970s 00:09:53.991 sys 0m12.386s 00:09:53.991 10:04:22 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.991 10:04:22 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:53.991 ************************************ 00:09:53.991 END TEST nvme_fio 00:09:53.991 ************************************ 00:09:53.991 ************************************ 00:09:53.991 END TEST nvme 00:09:53.991 ************************************ 00:09:53.991 00:09:53.991 real 1m33.426s 00:09:53.991 user 3m32.886s 00:09:53.991 sys 0m22.610s 00:09:53.991 10:04:22 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.991 10:04:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:53.991 10:04:22 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:53.991 10:04:22 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:53.991 10:04:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:53.991 10:04:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.991 10:04:22 -- common/autotest_common.sh@10 -- # set +x 00:09:53.991 ************************************ 00:09:53.991 START TEST nvme_scc 00:09:53.991 ************************************ 00:09:53.991 10:04:22 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:53.991 * Looking for test storage... 
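Aside on the three fio runs above: before every run the harness resolves which sanitizer runtime the fio plugin was linked against and preloads it ahead of the plugin itself, so ASan is initialized before fio dlopen()s the SPDK ioengine. A minimal sketch of that dance, assuming the paths from this log (the traced logic lives in common/autotest_common.sh):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (...)";
        # the third field is the resolved library path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Note the dots in traddr: the SPDK fio plugin expects ':' replaced with '.' inside --filename, because fio reserves ':' as its filename separator.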
00:09:53.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:53.991 10:04:22 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:53.991 10:04:22 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:53.991 10:04:22 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.251 10:04:22 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:54.251 10:04:22 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.251 10:04:22 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.251 --rc genhtml_branch_coverage=1 00:09:54.251 --rc genhtml_function_coverage=1 00:09:54.251 --rc genhtml_legend=1 00:09:54.251 --rc geninfo_all_blocks=1 00:09:54.251 --rc geninfo_unexecuted_blocks=1 00:09:54.251 00:09:54.251 ' 00:09:54.251 10:04:22 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.251 --rc genhtml_branch_coverage=1 00:09:54.251 --rc genhtml_function_coverage=1 00:09:54.251 --rc genhtml_legend=1 00:09:54.251 --rc geninfo_all_blocks=1 00:09:54.251 --rc geninfo_unexecuted_blocks=1 00:09:54.251 00:09:54.251 ' 00:09:54.251 10:04:22 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.251 --rc genhtml_branch_coverage=1 00:09:54.251 --rc genhtml_function_coverage=1 00:09:54.251 --rc genhtml_legend=1 00:09:54.251 --rc geninfo_all_blocks=1 00:09:54.251 --rc geninfo_unexecuted_blocks=1 00:09:54.251 00:09:54.251 ' 00:09:54.251 10:04:22 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.251 --rc genhtml_branch_coverage=1 00:09:54.251 --rc genhtml_function_coverage=1 00:09:54.251 --rc genhtml_legend=1 00:09:54.251 --rc geninfo_all_blocks=1 00:09:54.251 --rc geninfo_unexecuted_blocks=1 00:09:54.251 00:09:54.251 ' 00:09:54.251 10:04:22 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.251 10:04:22 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.251 10:04:22 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.251 10:04:22 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.251 10:04:22 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.251 10:04:22 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:54.251 10:04:22 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
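The lt/cmp_versions walk traced above checks whether the installed lcov predates version 2 before choosing coverage flags. Condensed into one helper, as a sketch that assumes purely numeric components (the real scripts/common.sh also validates each field through its decimal() helper and supports more operators):

    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components compare as 0, so 1.15 vs 2 is 1.15.0 vs 2.0.0.
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2"

With ver1=(1 15) and ver2=(2), the first component already decides the comparison, which is where the traced run returns 0.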
00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:54.251 10:04:22 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:54.251 10:04:22 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:54.251 10:04:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:54.251 10:04:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:54.251 10:04:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:54.251 10:04:22 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:54.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:54.511 Waiting for block devices as requested 00:09:54.770 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.770 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.770 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.031 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.430 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:00.430 10:04:28 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:00.430 10:04:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:00.430 10:04:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:00.430 10:04:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.430 10:04:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
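What scan_nvme_ctrls is doing from here on: for each controller under /sys/class/nvme, nvme_get pipes nvme-cli's id-ctrl (and, per namespace, id-ns) output through an IFS=: reader and evals every field into a global associative array named after the device, which is why the log repeats the IFS=:/read/eval triplet once per register. A minimal sketch of that loop, assuming well-behaved key names (the real test/common/nvme/functions.sh also normalizes multi-word values):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"           # e.g. declares a global nvme0=()
        while IFS=: read -r reg val; do
            # id-ctrl lines look like "vid       : 0x1b36"; skip any
            # line without a value after the colon.
            [[ -n $val ]] || continue
            reg=${reg//[[:space:]]/}                      # strip key padding
            val="${val#"${val%%[![:space:]]*}"}"          # trim leading space
            eval "$ref[$reg]=\"\$val\""
        done < <("$@")
    }
    nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
    echo "${nvme0[vid]}"   # -> 0x1b36 for the QEMU controller above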
00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.430 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.431 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:00.432 10:04:28 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.432 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.433 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.434 10:04:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.434 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
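
With the array filled in, the captured fields combine directly in shell arithmetic. An illustrative computation using only values logged for nvme0n1 above (nsze=0x140000, flbas=0x4, and lbaf4 reporting lbads:12, i.e. 4096-byte logical blocks), run in the same shell after nvme_get:

  blocks=$(( ${nvme0n1[nsze]} ))       # bash accepts the 0x prefix: 1310720
  fmt=$(( ${nvme0n1[flbas]} & 0xf ))   # in-use LBA format index: 4
  bytes=$(( blocks * 4096 ))           # lbads:12 -> 2^12-byte blocks
  echo "$bytes"                        # 5368709120, i.e. a 5 GiB namespace
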
00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.435 10:04:28 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.435 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:00.436 10:04:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:00.436 10:04:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:00.436 10:04:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.436 10:04:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:00.436 10:04:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.436 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.437 
10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
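
The id-ctrl fields captured for nvme1 are plain strings, so capability bitmasks such as oacs=0x12a can be tested with shell arithmetic. A small example (bit 1 of OACS is Format NVM support in the NVMe base specification):

  if (( ${nvme1[oacs]} & (1 << 1) )); then   # 0x12a has bit 1 set
      echo "nvme1 advertises Format NVM"
  fi
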
00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:00.437 10:04:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:00.437 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:00.438 10:04:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:00.438 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:00.439 10:04:28 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
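
Once every id-ctrl register of nvme1 has been read, the steps that follow (visible at functions.sh@53..@63 further down) attach the namespace map through a nameref and index the controller three ways. A sketch of that bookkeeping, using only names and values that appear in this log; the real functions.sh may differ in detail:

  declare -A ctrls nvmes bdfs nvme1_ns
  declare -a ordered_ctrls
  declare -n _ctrl_ns=nvme1_ns      # @53: nameref onto the per-controller ns map
  _ctrl_ns[1]=nvme1n1               # @58: ${ns##*n} -> namespace id 1
  ctrls[nvme1]=nvme1                # @60
  nvmes[nvme1]=nvme1_ns             # @61: name of the array holding its namespaces
  bdfs[nvme1]=0000:00:10.0          # @62: PCI address found in the sysfs walk
  ordered_ctrls[1]=nvme1            # @63: ${ctrl_dev/nvme/} -> slot 1
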
00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:00.439 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.440 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.441 
10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:00.441 10:04:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:00.441 10:04:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:00.441 10:04:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.441 10:04:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:00.441 10:04:28 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:00.441 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:00.442 10:04:28 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:00.442 10:04:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.442 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
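(The wctemp/cctemp pair captured just above, 343 and 373, is reported in kelvin, as the NVMe spec defines those fields; a one-line conversion shows the familiar QEMU defaults:

echo "warning at $((343 - 273))C, critical at $((373 - 273))C"   # -> warning at 70C, critical at 100C
)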
00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:00.443 10:04:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
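(The repeating IFS=: / read -r reg val / eval triple throughout this trace is the whole of the nvme_get parsing loop: nvme-cli's plain-text "field : value" output is split on the colon and stashed into a dynamically named associative array. A minimal sketch of the same idea follows; the function name parse_ctrl and the whitespace trim are illustrative, not lifted from nvme/functions.sh:

parse_ctrl() {
    local ref=$1 reg val
    local -gA "$ref=()"                      # global assoc array, as in the trace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # keys arrive padded; strip the padding
        [[ -n $reg ]] && eval "${ref}[\$reg]=\$val"
    done < <(nvme id-ctrl "/dev/$ref")
}
parse_ctrl nvme2 && echo "sn=${nvme2[sn]}"
)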
00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.443 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
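(The sqes/cqes values captured a few entries back, 0x66 and 0x44, each pack two sizes: per the NVMe spec the low nibble is the required entry size and the high nibble the maximum, both as powers of two. A quick decode, with illustrative variable names:

sqes=0x66 cqes=0x44
printf 'SQ entry %d..%d bytes, CQ entry %d..%d bytes\n' \
    $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4))) \
    $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
# -> SQ entry 64..64 bytes, CQ entry 16..16 bytes
)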
00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:00.444 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:00.445 
10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
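(With nsze=0x100000 and flbas=0x4 captured just above, and LBA format 4 carrying lbads:12 as the in-use format in the lbaf listing further down, the namespace size falls out directly: 2^20 blocks of 2^12 bytes each.

echo "$(( 0x100000 * 2 ** 12 / 1024 ** 3 )) GiB"   # -> 4 GiB
)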
00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:00.445 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.446 10:04:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.446 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:00.447 10:04:28 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:00.447 10:04:28 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:00.447 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
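
The frames above all come from nvme_get in nvme/functions.sh: it runs an nvme-cli identify command (functions.sh@16) and folds the resulting "field : value" report into a global associative array named after the device, which is why every field lands in nvme2n2[...]. A minimal sketch of that loop, with the whitespace trimming simplified and the $nvme binary variable assumed (the log only shows the resolved path, /usr/local/src/nvme-cli/nvme):

  nvme_get() {                                 # e.g. nvme_get nvme2n2 id-ns /dev/nvme2n2
      local ref=$1 reg val
      shift                                    # functions.sh@18: drop the array name
      local -gA "$ref=()"                      # functions.sh@20: declare -gA nvme2n2=()
      while IFS=: read -r reg val; do          # functions.sh@21: split on the first ':'
          reg=${reg//[[:space:]]/}             # assumed trim, giving keys like nsze, lbaf0
          [[ -n $val ]] || continue            # functions.sh@22: skip lines with no value
          eval "${ref}[${reg}]=\"${val# }\""   # functions.sh@23: store under the key
      done < <("$nvme" "$@")                   # functions.sh@16: run the nvme-cli command
  }

Because read is given two variables, only the first ':' splits the line, so values that themselves contain colons (the lbaf entries) survive intact in val.
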
00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
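
The lbaf0 through lbaf7 entries that follow describe the eight LBA formats this namespace supports: ms is the per-block metadata size in bytes, lbads is log2 of the data block size, and rp is a relative performance hint. flbas=0x4 recorded earlier in this dump selects format 4, which the report itself marks "(in use)": no metadata, 4096-byte blocks. A quick decode, assuming the standard NVMe field meanings:

  echo $((1 << 12))                   # lbads:12 -> 4096-byte data blocks
  echo $(((0x100000 * 4096) >> 30))   # nsze=0x100000 such blocks -> 4 GiB namespace
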
00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.448 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 
10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 
10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:00.449 10:04:28 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:00.449 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.450 
10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:00.450 10:04:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:00.450 10:04:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:00.450 10:04:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.450 10:04:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.450 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:00.451 10:04:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
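
Just above, the @58-@63 frames closed out controller nvme2 (recording its namespace map and PCI address 0000:00:12.0 in the ctrls/nvmes/bdfs arrays) and the @47-@52 frames opened controller nvme3 at 0000:00:13.0, whose id-ctrl output is now being parsed. A compressed sketch of that outer scan loop, reconstructed from the frame numbers visible in the log (the PCI lookup is an assumed implementation, and the bookkeeping arrays are declared elsewhere in functions.sh):

  for ctrl in /sys/class/nvme/nvme*; do                  # @47
      [[ -e $ctrl ]] || continue                         # @48
      pci=$(basename "$(readlink -f "$ctrl/device")")    # assumed lookup; @49 shows 0000:00:13.0
      pci_can_use "$pci" || continue                     # @50: allow/block-list check in scripts/common.sh
      ctrl_dev=${ctrl##*/}                               # @51: e.g. nvme3
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"      # @52: fill the nvme3 array
      for ns in "$ctrl/${ctrl##*/}n"*; do                # @54: nvme3n1, nvme3n2, ...
          [[ -e $ns ]] || continue                       # @55
          nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"    # @56-@57
          _ctrl_ns[${ns##*n}]=${ns##*/}                  # @58: key is the namespace number
      done
      ctrls["$ctrl_dev"]=$ctrl_dev                       # @60-@63: record the controller,
      bdfs["$ctrl_dev"]=$pci                             # its namespaces, and its address
  done
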
00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
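
Most id-ctrl fields for this QEMU controller (vid=0x1b36, mn="QEMU NVMe Ctrl") report zero, but two of the values above are worth decoding. ver=0x10400 packs the spec version as major<<16 | minor<<8, and mdts=7 caps transfer size at 2^mdts units of the controller's minimum page size; 4 KiB is assumed below, since CAP.MPSMIN is not part of this dump:

  printf '%d.%d\n' $((0x10400 >> 16)) $(((0x10400 >> 8) & 0xff))   # -> 1.4, i.e. NVMe 1.4
  echo "$(((1 << 7) * 4)) KiB"                                     # mdts=7 -> 512 KiB max transfer
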
00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:00.451 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 
10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:00.452 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
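
Two more packed fields show up just below: sqes=0x66 and cqes=0x44 each carry a maximum (high nibble) and a required (low nibble) entry-size exponent, so this controller uses the standard 64-byte submission and 16-byte completion queue entries. Decoded under that usual interpretation:

  echo $((1 << (0x66 & 0xf)))   # sqes low nibble: 2^6 = 64-byte SQ entries
  echo $((1 << (0x44 & 0xf)))   # cqes low nibble: 2^4 = 16-byte CQ entries
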
00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.453 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.454 10:04:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:00.454 10:04:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:00.454 
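The eval-per-field stream above is nvme/functions.sh's nvme_get loop turning `nvme id-ctrl` output into a bash associative array. A minimal standalone sketch of that idiom follows; the function name parse_id_ctrl and the canned input are illustrative, not the SPDK source:

    #!/usr/bin/env bash
    # Sketch: split "field : value" lines (the format `nvme id-ctrl` prints)
    # on ':' and store each pair with one eval, as the trace above does.
    declare -A ctrl

    parse_id_ctrl() {
      local reg val
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # drop padding around the field name
        val="${val#"${val%%[![:space:]]*}"}"      # trim leading spaces from the value
        [[ -n $reg && -n $val ]] || continue      # mirrors the [[ -n ... ]] guard in the trace
        eval "ctrl[$reg]=\"$val\""                # same assignment shape as 'nvme0[oncs]="0x15d"'
      done
    }

    # Feed canned output instead of a real /dev/nvme* device; process
    # substitution keeps the function (and its assignments) in this shell.
    parse_id_ctrl < <(printf '%s\n' \
      'oncs   : 0x15d' \
      'sqes   : 0x66' \
      'subnqn : nqn.2019-08.org.qemu:12341')
    echo "oncs=${ctrl[oncs]} subnqn=${ctrl[subnqn]}"

A plain ctrl[$reg]=$val would also work; the eval form matches the trace because the real helper writes into a caller-named array via a nameref.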
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme1: oncs=0x15d, (( oncs & 1 << 8 )) succeeds -- echo nvme1
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme0: oncs=0x15d, (( oncs & 1 << 8 )) succeeds -- echo nvme0
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme3: oncs=0x15d, (( oncs & 1 << 8 )) succeeds -- echo nvme3
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@198-199 -- # ctrl_has_scc nvme2: oncs=0x15d, (( oncs & 1 << 8 )) succeeds -- echo nvme2
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:10:00.454 10:04:28 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:10:00.454 10:04:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:00.455 10:04:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:00.455 10:04:28 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:00.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:01.292 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:01.292 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:01.292 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:01.554 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
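get_ctrls_with_feature keyed off ONCS bit 8 (the Copy command) for every controller above. A self-contained restatement of that check, with the 0x15d value hard-coded from this run for illustration:

    #!/usr/bin/env bash
    # ONCS bit 8 advertises the Copy command (what these helpers call "scc").
    # 0x15d = 0b1_0101_1101, so bit 8 (0x100) is set on all four controllers here.
    ctrl_has_scc() {
      local oncs=$1
      (( oncs & 1 << 8 ))   # the arithmetic result doubles as the exit status
    }

    for ctrl in nvme0 nvme1 nvme2 nvme3; do
      oncs=0x15d            # every controller reported oncs=0x15d in the trace
      ctrl_has_scc "$oncs" && echo "$ctrl supports simple copy"
    done

The first controller that passes the check (nvme1 here, iteration order of the associative array being arbitrary) becomes the test target at nvme_scc.sh@17.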
00:10:01.554 10:04:29 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:01.554 10:04:29 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:01.554 10:04:29 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:01.554 10:04:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:01.554 ************************************
00:10:01.554 START TEST nvme_simple_copy
00:10:01.554 ************************************
00:10:01.554 10:04:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:01.816 Initializing NVMe Controllers
00:10:01.816 Attaching to 0000:00:10.0
00:10:01.816 Controller supports SCC. Attached to 0000:00:10.0
00:10:01.816 Namespace ID: 1 size: 6GB
00:10:01.816 Initialization complete.
00:10:01.816
00:10:01.816 Controller QEMU NVMe Ctrl (12340 )
00:10:01.816 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:01.816 Namespace Block Size:4096
00:10:01.816 Writing LBAs 0 to 63 with Random Data
00:10:01.816 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:01.816 LBAs matching Written Data: 64
00:10:01.816
00:10:01.816 real 0m0.259s
00:10:01.816 user 0m0.107s
00:10:01.816 sys 0m0.051s
00:10:01.816 10:04:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:01.816 ************************************
00:10:01.816 END TEST nvme_simple_copy
00:10:01.816 ************************************
00:10:01.816 10:04:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:01.816 ************************************
00:10:01.816 END TEST nvme_scc
00:10:01.816 ************************************
00:10:01.816
00:10:01.816 real 0m7.819s
00:10:01.816 user 0m1.081s
00:10:01.816 sys 0m1.446s
00:10:01.816 10:04:30 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:01.816 10:04:30 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:01.816 10:04:30 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:01.816 10:04:30 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:01.816 10:04:30 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:01.816 10:04:30 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:01.816 10:04:30 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:01.816 10:04:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:10:01.816 10:04:30 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:01.816 10:04:30 -- common/autotest_common.sh@10 -- # set +x
00:10:01.816 ************************************
00:10:01.816 START TEST nvme_fdp
00:10:01.816 ************************************
00:10:01.816 10:04:30 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:10:02.077 * Looking for test storage...
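The banner-and-timing blocks above come from the run_test wrapper in autotest_common.sh: it brackets a timed invocation with START/END markers. A simplified stand-in, not the exact SPDK implementation, behaves like this:

    #!/usr/bin/env bash
    # Simplified run_test: banner, run the command under bash's `time`, banner.
    run_test() {
      local name=$1 rc
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
    }

    run_test demo_sleep sleep 0.2   # emits the banners plus real/user/sys lines

Nesting explains the doubled END blocks above: the inner nvme_simple_copy wrapper closes first, then the outer nvme_scc wrapper prints its own banner and totals (0m7.819s for the whole suite versus 0m0.259s for the copy itself).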
00:10:02.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:02.077 10:04:30 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:10:02.077 10:04:30 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version
00:10:02.077 10:04:30 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:10:02.077 10:04:30 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:10:02.077 10:04:30 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:02.077 10:04:30 nvme_fdp -- scripts/common.sh@333-345 -- # ver1=(1 15) ver2=(2) split on IFS=.-: with 'op=<' (ver1_l=2 ver2_l=1)
00:10:02.077 10:04:30 nvme_fdp -- scripts/common.sh@364-368 -- # decimal 1 vs decimal 2: (( ver1[v] < ver2[v] )) -- return 0
00:10:02.077 10:04:30 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:02.077 10:04:30 nvme_fdp -- common/autotest_common.sh@1694 -- # export LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:10:02.077 10:04:30 nvme_fdp -- common/autotest_common.sh@1695 -- # export LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:10:02.077 10:04:30 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:02.077 10:04:30 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:10:02.077 10:04:30 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:02.077 10:04:30 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:02.077 10:04:30 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:02.077 10:04:30 nvme_fdp -- paths/export.sh@2-5 -- # PATH is re-prefixed with the golangci/protoc/go tool directories at each assignment, then exported
00:10:02.077 10:04:30 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
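The lt 1.15 2 / cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates version 2. A condensed sketch of that comparison, using the same split-on-.-: idea but simplified from the verbatim source:

    #!/usr/bin/env bash
    # Return success when $1 sorts strictly before $2, numeric field by field.
    version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<<"$1"
      IFS=.-: read -ra ver2 <<<"$2"
      local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad missing fields with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
      done
      return 1   # versions are equal
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the trace's return 0

Splitting on ".", "-", and ":" lets the same helper rank tags like 1.15, 2.0-1, or 1.15:rc1 without a special case per format.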
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:10:02.077 10:04:30 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:10:02.077 10:04:30 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:02.077 10:04:30 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:02.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:02.600 Waiting for block devices as requested
00:10:02.600 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:02.600 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:02.600 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:02.862 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:08.169 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:08.169 10:04:36 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:10:08.169 10:04:36 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:08.169 10:04:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:10:08.169 10:04:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:08.169 10:04:36 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
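scan_nvme_ctrls, entered above, walks /sys/class/nvme and records each controller's PCI address before dumping id-ctrl. A rough skeleton of that walk; the array names mirror the trace, the sysfs readlink is an assumption about how the bdf is resolved, and the parsing step is elided:

    #!/usr/bin/env bash
    # Skeleton: map every /sys/class/nvme/nvmeN to its PCI bdf, the way the
    # ctrls[]/bdfs[] assignments in the trace record them.
    declare -A ctrls bdfs

    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      dev=${ctrl##*/}                                   # e.g. nvme0
      bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
      ctrls[$dev]=$dev
      bdfs[$dev]=$bdf
      # the real helper now runs `nvme id-ctrl /dev/$dev` and parses each field
    done

    for dev in "${!bdfs[@]}"; do
      echo "$dev -> ${bdfs[$dev]}"
    done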
00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:08.169 10:04:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:08.169 10:04:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:08.169 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:08.170 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.170 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:08.171 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:08.171 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:08.171 
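(The block above is functions.sh's nvme_get filling the nvme0 associative array from `nvme id-ctrl /dev/nvme0` output: every `reg : val` line is split at the first colon via IFS=: and `read -r reg val`, then assigned through eval, which is why each field appears in the trace as the same [[ -n ... ]] / eval / assignment triplet. A minimal sketch of that pattern follows; nvme_get_sketch and ctrl_info are illustrative names, not the script's own identifiers:

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA ctrl_info=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip banner and blank lines
            reg=${reg//[[:space:]]/}         # "vwc " -> "vwc"
            eval "${ref}[\$reg]=\${val# }"   # ctrl_info[vwc]=0x7
        done < <("$@")
    }
    # nvme_get_sketch ctrl_info /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
    # echo "${ctrl_info[vwc]}"               # -> 0x7 on this QEMU controller

Multi-word values such as ps0 survive intact because read -r hands everything after the first colon to val; it is also why the wrapped second half of the power-state line shows up in the trace as its own rwt key.)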
10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:08.171 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:08.172 10:04:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:08.172 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:08.173 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:08.173 10:04:36 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:08.173 10:04:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:08.173 10:04:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:08.173 10:04:36 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.173 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 
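(Behind these id-ctrl reads sits the controller scan visible at functions.sh@47-63 a little earlier in the trace: each /sys/class/nvme/nvmeN that passes the pci_can_use allow/block-list check from scripts/common.sh gets an nvme_get id-ctrl pass, then one nvme_get id-ns pass per namespace node, and the results are indexed by device name and PCI BDF. A condensed sketch of that loop; the readlink-based BDF lookup is an assumption added for self-containment, while the bookkeeping lines mirror the trace:

    scan_nvme_ctrls_sketch() {
        declare -gA ctrls=() nvmes=() bdfs=()
        declare -ga ordered_ctrls=()
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
            pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed way to derive the BDF, e.g. 0000:00:10.0
            pci_can_use "$pci" || continue
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -gA "${ctrl_dev}_ns=()"
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do               # /sys/class/nvme/nvme0/nvme0n1 ...
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns_dev##*n}]=$ns_dev               # nvme0_ns[1]=nvme0n1
            done
            unset -n _ctrl_ns
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci                            # 0000:00:11.0 for nvme0 above
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }

The per-controller namespace map (nvme0_ns, nvme1_ns, ...) is reached through a nameref, so the same loop body serves every controller.)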
10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.174 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 
10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:08.175 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:08.175 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.176 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:08.177 10:04:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val
00:10:08.177 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:08.178 10:04:36 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:08.178 10:04:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:08.178 10:04:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:08.178 10:04:36 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
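With nvme1 and its namespace registered above, the scan moves on to /dev/nvme2 and nvme_get repeats the same parse: run nvme-cli, split every output line at the first ':', and eval each pair into a global associative array. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 statements visible in this trace (only the @-referenced statements are taken from the log; the trimming details are assumptions, not the verbatim source):

    # Illustrative reconstruction of the traced pattern; details beyond the
    # xtrace lines (key/value trimming, quoting) are assumptions.
    nvme_get() {                                 # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
            local ref=$1 reg val                 # target array name (functions.sh@17)
            shift                                # leave only the nvme-cli arguments (@18)
            local -gA "$ref=()"                  # global associative array, e.g. nvme2=() (@20)
            while IFS=: read -r reg val; do      # split each line at the first ':' (@21)
                    reg=${reg//[[:space:]]/}     # 'vid       ' -> 'vid' (assumed trim)
                    [[ -n $val ]] || continue    # header lines carry no value (@22)
                    eval "${ref}[$reg]=\"${val# }\""   # nvme2[vid]="0x1b36" (@23)
            done < <(/usr/local/src/nvme-cli/nvme "$@")   # (@16)
    }

Every register dumped below is then available to the test logic that follows as e.g. ${nvme2[vid]} or ${nvme2[ctratt]}.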
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"'
00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:10:08.178 10:04:36 nvme_fdp
-- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.178 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:08.179 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:08.180 10:04:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:08.180 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
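The id-ctrl pass for nvme2 is complete at this point; the trace next walks the controller's namespaces and records the results in the same global lookup tables seen earlier for nvme1 (ctrls, nvmes, bdfs, ordered_ctrls). A sketch of that bookkeeping, pieced together from the functions.sh@47-63 lines in this trace; the scan_ctrl wrapper and its parameters are hypothetical glue, and only the statements tagged with @ line numbers appear in the log:

    # Hypothetical wrapper (scan_ctrl is not in the trace) around the traced
    # statements; assumes nvme_get from the sketch above, globally declared
    # ctrls/nvmes/bdfs/ordered_ctrls tables, and a caller supplying $ctrl/$pci.
    scan_ctrl() {
            local ctrl=$1 pci=$2 ns ns_dev
            local ctrl_dev=${ctrl##*/}                    # /sys/class/nvme/nvme2 -> nvme2
            local -n _ctrl_ns=${ctrl_dev}_ns              # nameref to nvme2_ns (@53)
            for ns in "$ctrl/${ctrl##*/}n"*; do           # nvme2n1, nvme2n2, ... (@54)
                    [[ -e $ns ]] || continue              # (@55)
                    ns_dev=${ns##*/}                      # nvme2n1 (@56)
                    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills nvme2n1=() (@57)
                    _ctrl_ns[${ns##*n}]=$ns_dev           # keyed by namespace index (@58)
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                  # (@60)
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # name of the ns map (@61)
            bdfs["$ctrl_dev"]=$pci                        # e.g. 0000:00:12.0 (@62)
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # (@63)
    }

The id-ns dump for nvme2n1 that follows is the nvme_get call traced at @57.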
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.181 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.182 10:04:36 nvme_fdp -- 
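The nvme_get call just issued for nvme2n2 repeats the same shell pattern the trace keeps emitting: pipe nvme-cli's "reg : val" output through IFS=: read, then eval each pair into a global associative array named after the device. A minimal standalone sketch of that pattern (the parse_id_ns wrapper is hypothetical, not the SPDK helper itself; the CI box invokes /usr/local/src/nvme-cli/nvme, plain `nvme` is used here for portability):

    # Sketch, assuming nvme-cli prints "reg : val" lines.
    parse_id_ns() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                      # declares e.g. global nvme2n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # strip padding around the key
            val=${val#"${val%%[![:space:]]*}"}   # left-trim the value
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"           # e.g. nvme2n1[nsze]=0x100000
        done < <(nvme id-ns "$dev")
    }
    # usage: parse_id_ns nvme2n1 /dev/nvme2n1; echo "${nvme2n1[nsze]}"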
00:10:08.182 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0 nvme2n2[nmic]=0
00:10:08.183 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1 nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0
00:10:08.183 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0 nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127
00:10:08.183 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0 nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
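Every namespace so far reports flbas=0x4 with lbaf4 marked "(in use)": the low nibble of flbas indexes the active LBA format, and lbads:12 means 2^12-byte blocks. A quick decode under those assumptions:

    flbas=0x4
    fmt=$(( flbas & 0xf ))                    # bits 3:0 select the format -> 4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'       # value captured in the trace
    lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}
    echo "active format lbaf$fmt: $((1 << lbads))-byte blocks, no metadata"  # 4096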
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000 nvme2n3[nsfeat]=0x14 nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4 nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0 nvme2n3[nmic]=0
00:10:08.184 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1 nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0 nvme2n3[nabspf]=0 nvme2n3[noiob]=0
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0 nvme2n3[nows]=0 nvme2n3[mssrl]=128 nvme2n3[mcl]=128 nvme2n3[msrc]=127
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0 nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
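The loop driving this whole section is plain sysfs globbing: controllers match /sys/class/nvme/nvme*, and each controller's namespaces match "<ctrl>/<name>n*" (e.g. /sys/class/nvme/nvme2/nvme2n1..n3 above). A self-contained sketch of that walk:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue              # glob may match nothing
        for ns in "$ctrl/${ctrl##*/}n"*; do     # e.g. /sys/class/nvme/nvme2/nvme2n1
            [[ -e $ns ]] || continue
            echo "controller ${ctrl##*/}: namespace ${ns##*/}"
        done
    done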
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:10:08.185 10:04:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:10:08.186 10:04:36 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:08.186 10:04:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:10:08.186 10:04:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:08.186 10:04:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:10:08.186 10:04:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:08.186 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 nvme3[ssvid]=0x1af4 nvme3[sn]='12343 ' nvme3[mn]='QEMU NVMe Ctrl ' nvme3[fr]='8.0.0 ' nvme3[rab]=6 nvme3[ieee]=525400 nvme3[cmic]=0x2 nvme3[mdts]=7 nvme3[cntlid]=0
00:10:08.186 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 nvme3[rtd3r]=0 nvme3[rtd3e]=0 nvme3[oaes]=0x100 nvme3[ctratt]=0x88010 nvme3[rrls]=0 nvme3[cntrltype]=1 nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:10:08.186 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 nvme3[crdt2]=0 nvme3[crdt3]=0 nvme3[nvmsr]=0 nvme3[vwci]=0 nvme3[mec]=0 nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3 nvme3[frmw]=0x3 nvme3[lpa]=0x7 nvme3[elpe]=0 nvme3[npss]=0
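nvme3 reports ctratt=0x88010, and bit 19 of CTRATT is the Flexible Data Placement capability this nvme_fdp suite exercises (my reading of NVMe 2.0/TP4146, so treat the bit position as an assumption). A one-line check:

    ctratt=0x88010                      # as reported by nvme3 above
    if (( ctratt & (1 << 19) )); then   # bit 19: FDP supported (assumed per TP4146)
        echo "FDP-capable controller (ctratt=$ctratt)"
    fi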
IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 
10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:08.187 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
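The long run of IFS=:/read/eval entries above, which continues below through subnqn and the power-state fields, is nvme/functions.sh caching every identify-controller register into a bash associative array named after the controller. A minimal sketch of that pattern, assuming a "name : value" dump such as nvme id-ctrl produces and a fixed array name (the real helper builds the array name dynamically via eval and namerefs):

    # Sketch: cache "name : value" identify output in an associative array.
    # Simplified from nvme/functions.sh; assumes one register per input line.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # register name with padding stripped
        read -r val <<<"$val"           # trim whitespace around the value
        [[ -n $reg && -n $val ]] || continue   # keep only populated registers
        nvme3[$reg]=$val                # e.g. nvme3[oacs]=0x12a, nvme3[lpa]=0x7
    done < <(nvme id-ctrl /dev/nvme3)

Each [[ -n value ]] / eval / assignment triple in the trace is one iteration of exactly this loop.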
00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.188 10:04:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:08.189 10:04:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
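With the arrays populated, the trace now walks every controller looking for the one that advertises Flexible Data Placement; the checks for nvme3 and nvme2 follow just below. ctrl_has_fdp reads the cached ctratt register and tests bit 19, the FDP capability bit. A minimal sketch of that test, reusing arrays shaped like the ones built above:

    # Sketch: report controllers whose CTRATT has bit 19 (FDP) set.
    # Assumes ctrls[] holds the controller names, as in the trace.
    ctrl_has_fdp() {
        local -n _ctrl=$1                 # nameref to e.g. the nvme3 array
        local ctratt=${_ctrl[ctratt]:-0}
        (( ctratt & 1 << 19 ))            # 0x88010 passes, plain 0x8000 fails
    }
    for ctrl in "${!ctrls[@]}"; do
        ctrl_has_fdp "$ctrl" && echo "$ctrl"
    done

In this run only nvme3 reports ctratt=0x88010, so it is the controller echoed back, and its BDF 0000:00:13.0 is handed to the fdp test binary.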
00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:08.189 10:04:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:08.189 10:04:36 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:08.189 10:04:36 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:08.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.335 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.335 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.335 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.335 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.335 10:04:37 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:09.335 10:04:37 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:09.335 10:04:37 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.335 10:04:37 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:09.335 ************************************ 00:10:09.335 START TEST nvme_flexible_data_placement 00:10:09.335 ************************************ 00:10:09.335 10:04:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:09.596 Initializing NVMe Controllers 00:10:09.596 Attaching to 0000:00:13.0 00:10:09.596 Controller supports FDP Attached to 0000:00:13.0 00:10:09.596 Namespace ID: 1 Endurance Group ID: 1 00:10:09.596 Initialization complete. 00:10:09.596 00:10:09.596 ================================== 00:10:09.596 == FDP tests for Namespace: #01 == 00:10:09.596 ================================== 00:10:09.596 00:10:09.596 Get Feature: FDP: 00:10:09.596 ================= 00:10:09.596 Enabled: Yes 00:10:09.596 FDP configuration Index: 0 00:10:09.596 00:10:09.596 FDP configurations log page 00:10:09.596 =========================== 00:10:09.596 Number of FDP configurations: 1 00:10:09.596 Version: 0 00:10:09.596 Size: 112 00:10:09.596 FDP Configuration Descriptor: 0 00:10:09.596 Descriptor Size: 96 00:10:09.596 Reclaim Group Identifier format: 2 00:10:09.596 FDP Volatile Write Cache: Not Present 00:10:09.596 FDP Configuration: Valid 00:10:09.596 Vendor Specific Size: 0 00:10:09.596 Number of Reclaim Groups: 2 00:10:09.596 Number of Reclaim Unit Handles: 8 00:10:09.596 Max Placement Identifiers: 128 00:10:09.596 Number of Namespaces Supported: 256 00:10:09.596 Reclaim Unit Nominal Size: 6000000 bytes 00:10:09.596 Estimated Reclaim Unit Time Limit: Not Reported 00:10:09.596 RUH Desc #000: RUH Type: Initially Isolated 00:10:09.596 RUH Desc #001: RUH Type: Initially Isolated 00:10:09.596 RUH Desc #002: RUH Type: Initially Isolated 00:10:09.596 RUH Desc #003: RUH Type: Initially Isolated 00:10:09.596 RUH Desc #004: RUH Type: Initially Isolated 00:10:09.596 RUH Desc #005: RUH Type: Initially Isolated 00:10:09.596 RUH Desc #006: RUH Type: Initially Isolated 00:10:09.596 RUH Desc #007: RUH Type: Initially Isolated 00:10:09.596 00:10:09.596 FDP reclaim unit handle usage log page 00:10:09.596 ====================================== 00:10:09.596 Number of Reclaim Unit Handles: 8 00:10:09.596 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:09.596 RUH Usage Desc #001: RUH Attributes: Unused 00:10:09.596 RUH Usage Desc #002: RUH Attributes: Unused 00:10:09.596 RUH Usage Desc #003: RUH Attributes: Unused 00:10:09.596 RUH Usage Desc #004: RUH Attributes: Unused 00:10:09.596 RUH Usage Desc #005: RUH Attributes: Unused 00:10:09.596 RUH Usage Desc #006: RUH Attributes: Unused 00:10:09.596 RUH Usage Desc #007: RUH Attributes: Unused 00:10:09.596 00:10:09.596 FDP statistics log page 00:10:09.596 ======================= 00:10:09.596 Host bytes with metadata written: 2167451648 00:10:09.596 Media bytes with metadata written: 2167812096 00:10:09.596 Media bytes erased: 0 00:10:09.596 00:10:09.596 FDP Reclaim unit handle status 00:10:09.596 ============================== 00:10:09.596 Number of RUHS descriptors: 2 00:10:09.596 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002cf5 00:10:09.596 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:09.596 00:10:09.596 FDP write on placement id: 0 success 00:10:09.596 00:10:09.596 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:10:09.596 00:10:09.596 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:09.596 00:10:09.596 Get Feature: FDP Events for Placement handle: #0 00:10:09.596 ======================== 00:10:09.596 Number of FDP Events: 6 00:10:09.596 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:09.596 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:09.596 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:09.596 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:09.596 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:09.596 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:09.596 00:10:09.596 FDP events log page 00:10:09.596 =================== 00:10:09.596 Number of FDP events: 1 00:10:09.596 FDP Event #0: 00:10:09.596 Event Type: RU Not Written to Capacity 00:10:09.596 Placement Identifier: Valid 00:10:09.596 NSID: Valid 00:10:09.596 Location: Valid 00:10:09.596 Placement Identifier: 0 00:10:09.596 Event Timestamp: 5 00:10:09.596 Namespace Identifier: 1 00:10:09.596 Reclaim Group Identifier: 0 00:10:09.596 Reclaim Unit Handle Identifier: 0 00:10:09.596 00:10:09.596 FDP test passed 00:10:09.596 ************************************ 00:10:09.596 END TEST nvme_flexible_data_placement 00:10:09.596 00:10:09.596 real 0m0.206s 00:10:09.596 user 0m0.047s 00:10:09.596 sys 0m0.057s 00:10:09.596 10:04:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.596 10:04:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:09.596 ************************************ 00:10:09.596 ************************************ 00:10:09.596 END TEST nvme_fdp 00:10:09.596 ************************************ 00:10:09.596 00:10:09.596 real 0m7.792s 00:10:09.596 user 0m1.012s 00:10:09.596 sys 0m1.479s 00:10:09.596 10:04:37 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.596 10:04:37 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:09.859 10:04:37 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:09.859 10:04:37 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:09.859 10:04:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:09.859 10:04:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.859 10:04:37 -- common/autotest_common.sh@10 -- # set +x 00:10:09.859 ************************************ 00:10:09.859 START TEST nvme_rpc 00:10:09.859 ************************************ 00:10:09.859 10:04:37 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:09.859 * Looking for test storage... 
00:10:09.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.859 10:04:38 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.859 --rc genhtml_branch_coverage=1 00:10:09.859 --rc genhtml_function_coverage=1 00:10:09.859 --rc genhtml_legend=1 00:10:09.859 --rc geninfo_all_blocks=1 00:10:09.859 --rc geninfo_unexecuted_blocks=1 00:10:09.859 00:10:09.859 ' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.859 --rc genhtml_branch_coverage=1 00:10:09.859 --rc genhtml_function_coverage=1 00:10:09.859 --rc genhtml_legend=1 00:10:09.859 --rc geninfo_all_blocks=1 00:10:09.859 --rc geninfo_unexecuted_blocks=1 00:10:09.859 00:10:09.859 ' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.859 --rc genhtml_branch_coverage=1 00:10:09.859 --rc genhtml_function_coverage=1 00:10:09.859 --rc genhtml_legend=1 00:10:09.859 --rc geninfo_all_blocks=1 00:10:09.859 --rc geninfo_unexecuted_blocks=1 00:10:09.859 00:10:09.859 ' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.859 --rc genhtml_branch_coverage=1 00:10:09.859 --rc genhtml_function_coverage=1 00:10:09.859 --rc genhtml_legend=1 00:10:09.859 --rc geninfo_all_blocks=1 00:10:09.859 --rc geninfo_unexecuted_blocks=1 00:10:09.859 00:10:09.859 ' 00:10:09.859 10:04:38 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.859 10:04:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:09.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.859 10:04:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:09.859 10:04:38 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77580 00:10:09.859 10:04:38 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:09.859 10:04:38 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:09.859 10:04:38 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77580 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77580 ']' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.859 10:04:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.121 [2024-11-03 10:04:38.275411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
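The bdf the test just resolved comes from gen_nvme.sh, which emits one JSON bdev config entry per local controller; jq pulls out the PCI addresses and the first one wins. A sketch of that selection, with $rootdir standing for /home/vagrant/spdk_repo/spdk as in this run:

    # Sketch: pick the first NVMe PCI address the way get_first_nvme_bdf does.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    bdf=${bdfs[0]}    # 0000:00:10.0 here, first of the four printed above

rpc.py then attaches that address as bdev Nvme0 with bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf", which is the next step traced below.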
00:10:10.121 [2024-11-03 10:04:38.275545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ] 00:10:10.121 [2024-11-03 10:04:38.410241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:10.121 [2024-11-03 10:04:38.462204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.121 [2024-11-03 10:04:38.462352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.093 10:04:39 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.093 10:04:39 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:11.093 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:11.093 Nvme0n1 00:10:11.093 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:11.093 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:11.354 request: 00:10:11.354 { 00:10:11.354 "bdev_name": "Nvme0n1", 00:10:11.354 "filename": "non_existing_file", 00:10:11.354 "method": "bdev_nvme_apply_firmware", 00:10:11.354 "req_id": 1 00:10:11.354 } 00:10:11.354 Got JSON-RPC error response 00:10:11.354 response: 00:10:11.354 { 00:10:11.354 "code": -32603, 00:10:11.354 "message": "open file failed." 00:10:11.354 } 00:10:11.354 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:11.354 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:11.354 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:11.615 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:11.615 10:04:39 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77580 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77580 ']' 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77580 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77580 00:10:11.615 killing process with pid 77580 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77580' 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77580 00:10:11.615 10:04:39 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77580 00:10:12.186 ************************************ 00:10:12.186 END TEST nvme_rpc 00:10:12.186 ************************************ 00:10:12.186 00:10:12.186 real 0m2.300s 00:10:12.186 user 0m4.382s 00:10:12.186 sys 0m0.578s 00:10:12.186 10:04:40 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.186 10:04:40 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.186 10:04:40 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:12.186 10:04:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:12.186 10:04:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.186 10:04:40 -- common/autotest_common.sh@10 -- # set +x 00:10:12.186 ************************************ 00:10:12.186 START TEST nvme_rpc_timeouts 00:10:12.186 ************************************ 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:12.186 * Looking for test storage... 00:10:12.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.186 10:04:40 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
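That "Waiting for process..." line is waitforlisten at work: the test launches a fresh spdk_tgt (pid 77674, traced just below) and blocks until the JSON-RPC socket answers. A minimal sketch of the idea, assuming rpc.py's -t timeout option and the rpc_get_methods RPC; the real waitforlisten in autotest_common.sh is more thorough about socket and retry handling:

    # Sketch: start spdk_tgt and poll its RPC socket until it is ready.
    "$rootdir/build/bin/spdk_tgt" -m 0x3 &
    spdk_tgt_pid=$!
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # bail if the target died
        sleep 0.5
    done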
00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.186 --rc genhtml_branch_coverage=1 00:10:12.186 --rc genhtml_function_coverage=1 00:10:12.186 --rc genhtml_legend=1 00:10:12.186 --rc geninfo_all_blocks=1 00:10:12.186 --rc geninfo_unexecuted_blocks=1 00:10:12.186 00:10:12.186 ' 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.186 --rc genhtml_branch_coverage=1 00:10:12.186 --rc genhtml_function_coverage=1 00:10:12.186 --rc genhtml_legend=1 00:10:12.186 --rc geninfo_all_blocks=1 00:10:12.186 --rc geninfo_unexecuted_blocks=1 00:10:12.186 00:10:12.186 ' 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.186 --rc genhtml_branch_coverage=1 00:10:12.186 --rc genhtml_function_coverage=1 00:10:12.186 --rc genhtml_legend=1 00:10:12.186 --rc geninfo_all_blocks=1 00:10:12.186 --rc geninfo_unexecuted_blocks=1 00:10:12.186 00:10:12.186 ' 00:10:12.186 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.186 --rc genhtml_branch_coverage=1 00:10:12.186 --rc genhtml_function_coverage=1 00:10:12.186 --rc genhtml_legend=1 00:10:12.186 --rc geninfo_all_blocks=1 00:10:12.186 --rc geninfo_unexecuted_blocks=1 00:10:12.187 00:10:12.187 ' 00:10:12.187 10:04:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:12.187 10:04:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77642 00:10:12.187 10:04:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77642 00:10:12.187 10:04:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77674 00:10:12.187 10:04:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:12.187 10:04:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77674 00:10:12.187 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77674 ']' 00:10:12.187 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.187 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.187 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.187 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.187 10:04:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:12.187 10:04:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:12.448 [2024-11-03 10:04:40.579091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:12.448 [2024-11-03 10:04:40.579268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77674 ] 00:10:12.448 [2024-11-03 10:04:40.717181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:12.448 [2024-11-03 10:04:40.768713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.448 [2024-11-03 10:04:40.768788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.391 Checking default timeout settings: 00:10:13.391 10:04:41 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.391 10:04:41 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:13.391 10:04:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:13.391 10:04:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:13.652 Making settings changes with rpc: 00:10:13.652 10:04:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:13.652 10:04:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:13.652 Check default vs. modified settings: 00:10:13.652 10:04:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:13.652 10:04:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:14.224 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:14.224 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77642 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77642 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:14.225 Setting action_on_timeout is changed as expected. 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77642 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77642 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:14.225 Setting timeout_us is changed as expected. 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77642 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77642 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:14.225 Setting timeout_admin_us is changed as expected. 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
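All three settings just verified follow one pattern: dump the full config with save_config before and after the change, normalize each field with grep/awk/sed, and insist the value moved. Condensed into a sketch (temp file names shortened from the _77642 ones in the trace):

    # Sketch: the before/after comparison run for each timeout setting.
    rpc=$rootdir/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done

For action_on_timeout that normalization strips the JSON quoting and commas, leaving the none vs. abort pair visible in the trace above.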
00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77642 /tmp/settings_modified_77642 00:10:14.225 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77674 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77674 ']' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77674 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77674 00:10:14.225 killing process with pid 77674 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77674' 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77674 00:10:14.225 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77674 00:10:14.486 RPC TIMEOUT SETTING TEST PASSED. 00:10:14.486 10:04:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:14.486 00:10:14.486 real 0m2.445s 00:10:14.486 user 0m4.800s 00:10:14.486 sys 0m0.571s 00:10:14.486 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.486 ************************************ 00:10:14.486 END TEST nvme_rpc_timeouts 00:10:14.486 ************************************ 00:10:14.486 10:04:42 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:14.486 10:04:42 -- spdk/autotest.sh@239 -- # uname -s 00:10:14.748 10:04:42 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:14.748 10:04:42 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:14.748 10:04:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.748 10:04:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.748 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:10:14.748 ************************************ 00:10:14.748 START TEST sw_hotplug 00:10:14.748 ************************************ 00:10:14.748 10:04:42 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:14.748 * Looking for test storage... 
00:10:14.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:14.748 10:04:42 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:14.748 10:04:42 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:14.748 10:04:42 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:14.748 10:04:42 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.748 10:04:42 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.748 10:04:43 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:14.748 10:04:43 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.748 10:04:43 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:14.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.748 --rc genhtml_branch_coverage=1 00:10:14.748 --rc genhtml_function_coverage=1 00:10:14.748 --rc genhtml_legend=1 00:10:14.748 --rc geninfo_all_blocks=1 00:10:14.748 --rc geninfo_unexecuted_blocks=1 00:10:14.748 00:10:14.748 ' 00:10:14.748 10:04:43 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:14.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.748 --rc genhtml_branch_coverage=1 00:10:14.748 --rc genhtml_function_coverage=1 00:10:14.748 --rc genhtml_legend=1 00:10:14.748 --rc geninfo_all_blocks=1 00:10:14.748 --rc geninfo_unexecuted_blocks=1 00:10:14.748 00:10:14.748 ' 00:10:14.748 10:04:43 
sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:14.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.748 --rc genhtml_branch_coverage=1 00:10:14.748 --rc genhtml_function_coverage=1 00:10:14.748 --rc genhtml_legend=1 00:10:14.748 --rc geninfo_all_blocks=1 00:10:14.748 --rc geninfo_unexecuted_blocks=1 00:10:14.748 00:10:14.748 ' 00:10:14.748 10:04:43 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:14.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.748 --rc genhtml_branch_coverage=1 00:10:14.748 --rc genhtml_function_coverage=1 00:10:14.748 --rc genhtml_legend=1 00:10:14.748 --rc geninfo_all_blocks=1 00:10:14.748 --rc geninfo_unexecuted_blocks=1 00:10:14.748 00:10:14.748 ' 00:10:14.748 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:15.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:15.271 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:15.271 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:15.271 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:15.271 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:15.271 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:15.271 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:15.271 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:15.271 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:15.271 
10:04:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:15.271 10:04:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:15.272 10:04:43 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:15.272 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:15.272 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:15.272 10:04:43 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:15.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:15.794 Waiting for block devices as requested 00:10:15.794 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.794 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.055 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.055 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.347 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:21.347 10:04:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:21.347 10:04:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:21.609 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:21.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:21.609 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:21.870 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:22.131 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.131 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.131 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:22.131 10:04:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78520 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:22.392 10:04:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:22.392 10:04:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:22.392 10:04:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:22.392 10:04:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:22.392 10:04:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:22.392 10:04:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:22.392 Initializing NVMe Controllers 00:10:22.392 Attaching to 0000:00:10.0 00:10:22.392 Attaching to 0000:00:11.0 00:10:22.392 Attached to 0000:00:11.0 00:10:22.392 Attached to 0000:00:10.0 00:10:22.392 Initialization complete. Starting I/O... 00:10:22.392 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:22.392 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:22.392 00:10:23.773 QEMU NVMe Ctrl (12341 ): 2790 I/Os completed (+2790) 00:10:23.773 QEMU NVMe Ctrl (12340 ): 2788 I/Os completed (+2788) 00:10:23.773 00:10:24.345 QEMU NVMe Ctrl (12341 ): 5934 I/Os completed (+3144) 00:10:24.345 QEMU NVMe Ctrl (12340 ): 5934 I/Os completed (+3146) 00:10:24.345 00:10:25.731 QEMU NVMe Ctrl (12341 ): 9082 I/Os completed (+3148) 00:10:25.731 QEMU NVMe Ctrl (12340 ): 9085 I/Os completed (+3151) 00:10:25.731 00:10:26.671 QEMU NVMe Ctrl (12341 ): 13094 I/Os completed (+4012) 00:10:26.671 QEMU NVMe Ctrl (12340 ): 13100 I/Os completed (+4015) 00:10:26.671 00:10:27.614 QEMU NVMe Ctrl (12341 ): 16550 I/Os completed (+3456) 00:10:27.614 QEMU NVMe Ctrl (12340 ): 16601 I/Os completed (+3501) 00:10:27.614 00:10:28.185 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:28.185 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.186 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.186 [2024-11-03 10:04:56.527516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:28.186 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:28.186 [2024-11-03 10:04:56.529061] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 [2024-11-03 10:04:56.529108] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 [2024-11-03 10:04:56.529122] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 [2024-11-03 10:04:56.529136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:28.186 [2024-11-03 10:04:56.530000] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 [2024-11-03 10:04:56.530034] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 [2024-11-03 10:04:56.530045] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 [2024-11-03 10:04:56.530058] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.186 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.186 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.447 [2024-11-03 10:04:56.549514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:28.447 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:28.447 [2024-11-03 10:04:56.551568] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 [2024-11-03 10:04:56.551622] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 [2024-11-03 10:04:56.551647] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 [2024-11-03 10:04:56.551669] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:28.447 [2024-11-03 10:04:56.552527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 [2024-11-03 10:04:56.552644] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 [2024-11-03 10:04:56.552663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 [2024-11-03 10:04:56.552674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:28.447 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:28.447 Attaching to 0000:00:10.0 00:10:28.447 Attached to 0000:00:10.0 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.447 10:04:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:28.447 Attaching to 0000:00:11.0 00:10:28.447 Attached to 0000:00:11.0 00:10:29.390 QEMU NVMe Ctrl (12340 ): 3145 I/Os completed (+3145) 00:10:29.390 QEMU NVMe Ctrl (12341 ): 2891 I/Os completed (+2891) 00:10:29.390 00:10:30.336 QEMU NVMe Ctrl (12340 ): 6217 I/Os completed (+3072) 00:10:30.336 QEMU NVMe Ctrl (12341 ): 5969 I/Os completed (+3078) 00:10:30.336 00:10:31.722 QEMU NVMe Ctrl (12340 ): 10095 I/Os completed (+3878) 00:10:31.722 QEMU NVMe Ctrl (12341 ): 9810 I/Os completed (+3841) 00:10:31.722 00:10:32.664 QEMU NVMe Ctrl (12340 ): 13719 I/Os completed (+3624) 00:10:32.664 QEMU NVMe Ctrl (12341 ): 13562 I/Os completed (+3752) 00:10:32.664 00:10:33.608 QEMU NVMe Ctrl (12340 ): 16763 I/Os completed (+3044) 00:10:33.608 QEMU NVMe Ctrl (12341 ): 16610 I/Os completed (+3048) 00:10:33.608 00:10:34.554 QEMU NVMe Ctrl (12340 ): 19892 I/Os completed (+3129) 00:10:34.554 QEMU NVMe Ctrl (12341 ): 19791 I/Os completed (+3181) 00:10:34.554 00:10:35.500 QEMU NVMe Ctrl (12340 ): 23036 I/Os completed (+3144) 00:10:35.500 QEMU NVMe Ctrl (12341 ): 22945 I/Os completed (+3154) 00:10:35.500 00:10:36.444 QEMU NVMe Ctrl (12340 ): 26884 I/Os completed (+3848) 00:10:36.444 QEMU NVMe Ctrl (12341 ): 26730 I/Os completed (+3785) 
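The `echo 1` traces at sw_hotplug.sh@40 in this phase are the surprise removals: each controller is detached through sysfs while the hotplug example is mid-I/O, which is what produces the failed-state and aborted-command messages above. One remove/reattach cycle looks roughly like this (sysfs paths assumed; the rescan write is the same one visible in the EXIT trap later in this log):

  # Surprise-remove one controller out from under the running app ...
  echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove
  sleep 6   # hotplug_wait, per the trace above
  # ... then rescan the bus so it re-enumerates and the app re-attaches.
  echo 1 > /sys/bus/pci/rescan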
00:10:36.444 00:10:37.388 QEMU NVMe Ctrl (12340 ): 30008 I/Os completed (+3124) 00:10:37.388 QEMU NVMe Ctrl (12341 ): 29852 I/Os completed (+3122) 00:10:37.388 00:10:38.334 QEMU NVMe Ctrl (12340 ): 33177 I/Os completed (+3169) 00:10:38.334 QEMU NVMe Ctrl (12341 ): 33070 I/Os completed (+3218) 00:10:38.334 00:10:39.750 QEMU NVMe Ctrl (12340 ): 36237 I/Os completed (+3060) 00:10:39.750 QEMU NVMe Ctrl (12341 ): 36150 I/Os completed (+3080) 00:10:39.750 00:10:40.689 QEMU NVMe Ctrl (12340 ): 39284 I/Os completed (+3047) 00:10:40.689 QEMU NVMe Ctrl (12341 ): 39194 I/Os completed (+3044) 00:10:40.689 00:10:40.689 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:40.689 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.689 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.689 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.690 [2024-11-03 10:05:08.803033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:40.690 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:40.690 [2024-11-03 10:05:08.804336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.804398] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.804415] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.804435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:40.690 [2024-11-03 10:05:08.805828] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.805893] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.805908] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.805926] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.690 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.690 [2024-11-03 10:05:08.828397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:40.690 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:40.690 [2024-11-03 10:05:08.829553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.829742] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.829823] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.829854] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:40.690 [2024-11-03 10:05:08.831148] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.831324] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.831353] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 [2024-11-03 10:05:08.831367] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.690 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:40.690 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:40.690 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.690 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.690 10:05:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:40.950 Attaching to 0000:00:10.0 00:10:40.950 Attached to 0000:00:10.0 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.950 10:05:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:40.950 Attaching to 0000:00:11.0 00:10:40.950 Attached to 0000:00:11.0 00:10:41.516 QEMU NVMe Ctrl (12340 ): 2262 I/Os completed (+2262) 00:10:41.516 QEMU NVMe Ctrl (12341 ): 2016 I/Os completed (+2016) 00:10:41.516 00:10:42.452 QEMU NVMe Ctrl (12340 ): 5757 I/Os completed (+3495) 00:10:42.452 QEMU NVMe Ctrl (12341 ): 5604 I/Os completed (+3588) 00:10:42.452 00:10:43.393 QEMU NVMe Ctrl (12340 ): 8857 I/Os completed (+3100) 00:10:43.393 QEMU NVMe Ctrl (12341 ): 8709 I/Os completed (+3105) 00:10:43.393 00:10:44.777 QEMU NVMe Ctrl (12340 ): 12037 I/Os completed (+3180) 00:10:44.777 QEMU NVMe Ctrl (12341 ): 11906 I/Os completed (+3197) 00:10:44.777 00:10:45.347 QEMU NVMe Ctrl (12340 ): 15041 I/Os completed (+3004) 00:10:45.347 QEMU NVMe Ctrl (12341 ): 14913 I/Os completed (+3007) 00:10:45.347 00:10:46.733 QEMU NVMe Ctrl (12340 ): 18088 I/Os completed (+3047) 00:10:46.733 QEMU NVMe Ctrl (12341 ): 17962 I/Os completed (+3049) 00:10:46.733 00:10:47.676 QEMU NVMe Ctrl (12340 ): 21260 I/Os completed (+3172) 00:10:47.676 QEMU NVMe Ctrl (12341 ): 21134 I/Os completed (+3172) 00:10:47.676 00:10:48.679 QEMU NVMe Ctrl (12340 ): 24300 I/Os completed (+3040) 00:10:48.679 QEMU NVMe Ctrl (12341 ): 24179 I/Os completed (+3045) 00:10:48.679 
00:10:49.614 QEMU NVMe Ctrl (12340 ): 27600 I/Os completed (+3300) 00:10:49.614 QEMU NVMe Ctrl (12341 ): 27483 I/Os completed (+3304) 00:10:49.614 00:10:50.548 QEMU NVMe Ctrl (12340 ): 31435 I/Os completed (+3835) 00:10:50.548 QEMU NVMe Ctrl (12341 ): 31267 I/Os completed (+3784) 00:10:50.548 00:10:51.483 QEMU NVMe Ctrl (12340 ): 35175 I/Os completed (+3740) 00:10:51.483 QEMU NVMe Ctrl (12341 ): 35012 I/Os completed (+3745) 00:10:51.483 00:10:52.427 QEMU NVMe Ctrl (12340 ): 38458 I/Os completed (+3283) 00:10:52.427 QEMU NVMe Ctrl (12341 ): 38297 I/Os completed (+3285) 00:10:52.427 00:10:52.999 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:52.999 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.999 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.999 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.999 [2024-11-03 10:05:21.152459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:52.999 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:52.999 [2024-11-03 10:05:21.153854] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.999 [2024-11-03 10:05:21.154037] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.154078] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.154176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:53.000 [2024-11-03 10:05:21.156822] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.156995] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.157033] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.157050] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:53.000 EAL: Scan for (pci) bus failed. 00:10:53.000 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.000 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.000 [2024-11-03 10:05:21.175329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:53.000 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:53.000 [2024-11-03 10:05:21.176441] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.176493] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.176513] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.176528] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:53.000 [2024-11-03 10:05:21.177741] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.177789] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.177805] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 [2024-11-03 10:05:21.177818] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.000 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:53.000 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:53.000 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:53.000 EAL: Scan for (pci) bus failed. 00:10:53.000 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.000 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.000 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.260 Attaching to 0000:00:10.0 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:53.260 Attached to 0000:00:10.0 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.260 10:05:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:53.260 Attaching to 0000:00:11.0 00:10:53.260 Attached to 0000:00:11.0 00:10:53.260 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:53.260 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:53.260 [2024-11-03 10:05:21.490851] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:05.503 10:05:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:05.503 10:05:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:05.503 10:05:33 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.96 00:11:05.503 10:05:33 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.96 00:11:05.503 10:05:33 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:05.503 10:05:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.96 00:11:05.503 10:05:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.96 2 00:11:05.503 remove_attach_helper took 42.96s to complete (handling 2 nvme drive(s)) 10:05:33 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78520 00:11:12.091 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78520) - No such process 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78520 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79068 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:12.091 10:05:39 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79068 00:11:12.091 10:05:39 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79068 ']' 00:11:12.091 10:05:39 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.091 10:05:39 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.091 10:05:39 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.091 10:05:39 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.091 10:05:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.091 [2024-11-03 10:05:39.578816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:12.091 [2024-11-03 10:05:39.579805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79068 ] 00:11:12.091 [2024-11-03 10:05:39.717975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.091 [2024-11-03 10:05:39.771942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:12.091 10:05:40 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:12.091 10:05:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.680 10:05:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.680 10:05:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.680 10:05:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:18.680 10:05:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:18.680 [2024-11-03 10:05:46.536914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:18.680 [2024-11-03 10:05:46.537990] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:46.538023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:46.538038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.680 [2024-11-03 10:05:46.538051] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:46.538059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:46.538066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.680 [2024-11-03 10:05:46.538075] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:46.538081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:46.538089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.680 [2024-11-03 10:05:46.538094] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:46.538102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:46.538108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.680 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:18.680 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.680 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.680 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.680 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.680 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.680 10:05:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.680 10:05:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.680 10:05:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.680 [2024-11-03 10:05:47.036910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
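In this target-mode phase the removal is detected by spdk_tgt itself: `bdev_nvme_set_hotplug -e` (traced at sw_hotplug.sh@115 above) enables the hotplug monitor, and the aborted ASYNC EVENT REQUEST completions are, plausibly, the admin-queue AERs being cancelled as each failed controller is detached and its bdevs torn down. Enabling the monitor by hand is just:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e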
00:11:18.680 [2024-11-03 10:05:47.037934] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:47.037966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:47.037976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.680 [2024-11-03 10:05:47.037988] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:47.037995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:47.038003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.680 [2024-11-03 10:05:47.038009] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:47.038017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:47.038023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.680 [2024-11-03 10:05:47.038033] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.680 [2024-11-03 10:05:47.038039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.680 [2024-11-03 10:05:47.038046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.941 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:18.941 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.202 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:19.202 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.202 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.202 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.202 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.202 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.202 10:05:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.202 10:05:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.463 10:05:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.463 10:05:47 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:19.463 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.724 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.724 10:05:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.960 10:05:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.960 10:05:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.960 10:05:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.960 10:05:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.960 10:05:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.960 10:05:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:31.960 10:05:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.960 [2024-11-03 10:05:59.937106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
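The `bdev_bdfs` helper being traced here is what the wait loop polls: it asks the target which PCI addresses still back NVMe bdevs, and the loop spins until the removed address drops out. Reconstructed from the trace (the script feeds jq via /dev/fd/63 process substitution; a plain pipe is equivalent):

  bdev_bdfs() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u
  }
  # Poll until a removed device's bdevs are gone:
  while bdev_bdfs | grep -q 0000:00:10.0; do
    printf 'Still waiting for %s to be gone\n' 0000:00:10.0
    sleep 0.5
  done
  # The backslash-heavy [[ ... == \0\0\0\0... ]] lines in this log are only
  # how `set -x` prints the quoted right-hand side of a literal string match,
  # roughly: [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]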
00:11:31.960 [2024-11-03 10:05:59.938145] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.960 [2024-11-03 10:05:59.938172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.960 [2024-11-03 10:05:59.938184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.960 [2024-11-03 10:05:59.938195] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.960 [2024-11-03 10:05:59.938203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.960 [2024-11-03 10:05:59.938210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.960 [2024-11-03 10:05:59.938217] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.960 [2024-11-03 10:05:59.938234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.960 [2024-11-03 10:05:59.938242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.960 [2024-11-03 10:05:59.938249] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.960 [2024-11-03 10:05:59.938258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.960 [2024-11-03 10:05:59.938264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.221 [2024-11-03 10:06:00.337113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:32.221 [2024-11-03 10:06:00.338193] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.221 [2024-11-03 10:06:00.338239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.221 [2024-11-03 10:06:00.338251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.221 [2024-11-03 10:06:00.338265] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.221 [2024-11-03 10:06:00.338273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.221 [2024-11-03 10:06:00.338282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.221 [2024-11-03 10:06:00.338289] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.221 [2024-11-03 10:06:00.338297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.221 [2024-11-03 10:06:00.338304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.221 [2024-11-03 10:06:00.338313] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.221 [2024-11-03 10:06:00.338320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.221 [2024-11-03 10:06:00.338329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.221 10:06:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.221 10:06:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.221 10:06:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.221 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.483 10:06:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.718 10:06:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.718 10:06:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.718 10:06:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.718 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.718 10:06:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.718 10:06:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.719 10:06:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.719 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:44.719 10:06:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.719 [2024-11-03 10:06:12.837324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:44.719 [2024-11-03 10:06:12.838386] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.719 [2024-11-03 10:06:12.838414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.719 [2024-11-03 10:06:12.838428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.719 [2024-11-03 10:06:12.838441] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.719 [2024-11-03 10:06:12.838450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.719 [2024-11-03 10:06:12.838456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.719 [2024-11-03 10:06:12.838465] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.719 [2024-11-03 10:06:12.838472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.719 [2024-11-03 10:06:12.838480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.719 [2024-11-03 10:06:12.838486] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.719 [2024-11-03 10:06:12.838494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.719 [2024-11-03 10:06:12.838500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.979 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:44.979 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.979 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.979 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.979 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.979 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.979 10:06:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.979 10:06:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.979 10:06:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.979 [2024-11-03 10:06:13.337332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
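Note: the sleep 0.5 / printf 'Still waiting for %s to be gone' pattern at sw_hotplug.sh lines 50-51 is a poll loop over that helper. Each surprise-removed controller briefly fails its outstanding admin commands (the ABORTED - BY REQUEST completions above) before its bdev disappears from the list. A sketch of the loop shape, assuming it re-samples until the array is empty:

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done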
00:11:44.979 [2024-11-03 10:06:13.338329] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.979 [2024-11-03 10:06:13.338361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.979 [2024-11-03 10:06:13.338371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.979 [2024-11-03 10:06:13.338382] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.979 [2024-11-03 10:06:13.338389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.979 [2024-11-03 10:06:13.338399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.979 [2024-11-03 10:06:13.338406] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.979 [2024-11-03 10:06:13.338414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.979 [2024-11-03 10:06:13.338421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.979 [2024-11-03 10:06:13.338428] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.979 [2024-11-03 10:06:13.338434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.979 [2024-11-03 10:06:13.338442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.252 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:45.252 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:45.521 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:45.521 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.521 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.521 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.521 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.521 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.521 10:06:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.521 10:06:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.521 10:06:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.781 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:45.781 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:45.781 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.781 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.781 10:06:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:45.781 10:06:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:45.781 10:06:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.781 10:06:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.781 10:06:14 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.781 10:06:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:45.781 10:06:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:45.781 10:06:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.781 10:06:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.69 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.69 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.69 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.69 2 00:11:58.074 remove_attach_helper took 45.69s to complete (handling 2 nvme drive(s)) 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:58.074 10:06:26 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # 
local use_bdev=true 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:58.074 10:06:26 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.658 10:06:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.658 10:06:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.658 10:06:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:04.658 [2024-11-03 10:06:32.259858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:04.658 [2024-11-03 10:06:32.260660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.658 [2024-11-03 10:06:32.260770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.658 [2024-11-03 10:06:32.260787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.658 [2024-11-03 10:06:32.260800] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.658 [2024-11-03 10:06:32.260809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.658 [2024-11-03 10:06:32.260816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.658 [2024-11-03 10:06:32.260824] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.658 [2024-11-03 10:06:32.260831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.658 [2024-11-03 10:06:32.260841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.658 [2024-11-03 10:06:32.260847] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.658 [2024-11-03 10:06:32.260854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.658 [2024-11-03 10:06:32.260861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.658 10:06:32 
sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.658 10:06:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.658 10:06:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.658 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.658 10:06:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.658 [2024-11-03 10:06:32.759861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:04.658 [2024-11-03 10:06:32.760611] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.658 [2024-11-03 10:06:32.760642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.658 [2024-11-03 10:06:32.760651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.658 [2024-11-03 10:06:32.760663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.658 [2024-11-03 10:06:32.760670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.658 [2024-11-03 10:06:32.760678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.658 [2024-11-03 10:06:32.760685] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.659 [2024-11-03 10:06:32.760693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.659 [2024-11-03 10:06:32.760699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.659 [2024-11-03 10:06:32.760707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.659 [2024-11-03 10:06:32.760713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.659 [2024-11-03 10:06:32.760723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.659 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:04.659 10:06:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:05.230 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:05.230 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
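Note: the 45.69 s figure reported earlier comes from timing_cmd (autotest_common.sh lines 707-720), which wraps remove_attach_helper 3 6 true and echoes bash's elapsed time under TIMEFORMAT=%2R for the caller to store in helper_time. A rough sketch under those assumptions; the trace shows the variable names and the format string, not the redirection setup done by the exec at line 709:

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        exec 3>&1
        # bash's time keyword reports on stderr; capture that (this
        # simplification also captures the command's own stderr) while
        # the command's stdout passes through on fd 3.
        time=$({ time "$@" 1>&3; } 2>&1) || cmd_es=$?
        exec 3>&-
        echo "$time"
        return "$cmd_es"
    }

    # usage, matching the trace: helper_time=$(timing_cmd remove_attach_helper 3 6 true)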
00:12:05.231 10:06:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.231 10:06:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.231 10:06:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.231 10:06:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.466 10:06:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.466 10:06:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.466 10:06:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:17.466 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.467 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.467 10:06:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.467 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.467 10:06:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.467 10:06:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.467 [2024-11-03 10:06:45.660072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
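Note: the echo sequence at sw_hotplug.sh lines 56-62 is the re-attach half of each hotplug event: rescan the bus, then steer every function back onto uio_pci_generic. The log records only the values written; the sysfs destinations below are assumptions about where such values conventionally go:

    echo 1 > /sys/bus/pci/rescan                                            # assumed node
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # assumed
        echo "$dev" > /sys/bus/pci/drivers_probe                            # assumed
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind \
            2>/dev/null || true                                             # assumed; no-op if already bound
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # assumed reset, matching the trailing echo ''
    done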
00:12:17.467 [2024-11-03 10:06:45.660857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.467 [2024-11-03 10:06:45.660881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.467 [2024-11-03 10:06:45.660892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.467 [2024-11-03 10:06:45.660903] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.467 [2024-11-03 10:06:45.660913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.467 [2024-11-03 10:06:45.660921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.467 [2024-11-03 10:06:45.660929] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.467 [2024-11-03 10:06:45.660935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.467 [2024-11-03 10:06:45.660943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.467 [2024-11-03 10:06:45.660949] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.467 [2024-11-03 10:06:45.660957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.467 [2024-11-03 10:06:45.660964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.467 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:17.467 10:06:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:17.728 [2024-11-03 10:06:46.060067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:17.728 [2024-11-03 10:06:46.060793] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.728 [2024-11-03 10:06:46.060826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.728 [2024-11-03 10:06:46.060835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.728 [2024-11-03 10:06:46.060846] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.728 [2024-11-03 10:06:46.060853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.728 [2024-11-03 10:06:46.060861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.728 [2024-11-03 10:06:46.060868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.728 [2024-11-03 10:06:46.060875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.728 [2024-11-03 10:06:46.060882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.728 [2024-11-03 10:06:46.060889] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.728 [2024-11-03 10:06:46.060895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.728 [2024-11-03 10:06:46.060903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.989 10:06:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.989 10:06:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.989 10:06:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:17.989 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:18.250 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.250 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.250 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.250 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:18.250 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:18.250 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.250 10:06:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:30.482 10:06:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.482 10:06:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.482 10:06:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:30.482 10:06:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.482 10:06:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.482 10:06:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:30.482 10:06:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:30.482 [2024-11-03 10:06:58.560291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
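Note: on the detach side, the paired echo 1 at sw_hotplug.sh lines 39-40 fires once per controller and is what drives both devices into the failed state logged above. Writing 1 to a device's sysfs remove node is the usual mechanism for a surprise removal, though the exact path is not visible in the log:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed target node
    done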
00:12:30.482 [2024-11-03 10:06:58.561051] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.482 [2024-11-03 10:06:58.561069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.482 [2024-11-03 10:06:58.561080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.482 [2024-11-03 10:06:58.561091] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.482 [2024-11-03 10:06:58.561102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.482 [2024-11-03 10:06:58.561108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.482 [2024-11-03 10:06:58.561116] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.482 [2024-11-03 10:06:58.561123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.482 [2024-11-03 10:06:58.561131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.482 [2024-11-03 10:06:58.561137] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.482 [2024-11-03 10:06:58.561145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.482 [2024-11-03 10:06:58.561151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.743 [2024-11-03 10:06:58.960297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:30.743 [2024-11-03 10:06:58.961002] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.743 [2024-11-03 10:06:58.961030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.743 [2024-11-03 10:06:58.961040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.743 [2024-11-03 10:06:58.961050] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.743 [2024-11-03 10:06:58.961057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.743 [2024-11-03 10:06:58.961065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.743 [2024-11-03 10:06:58.961072] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.743 [2024-11-03 10:06:58.961082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.743 [2024-11-03 10:06:58.961088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.743 [2024-11-03 10:06:58.961095] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.743 [2024-11-03 10:06:58.961102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.743 [2024-11-03 10:06:58.961109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:30.743 10:06:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.743 10:06:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:30.743 10:06:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:30.743 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:31.004 10:06:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.18 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.18 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:12:43.284 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:43.284 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79068 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79068 ']' 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79068 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79068 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.284 killing process with pid 79068 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79068' 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79068 00:12:43.284 10:07:11 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79068 00:12:43.545 10:07:11 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:43.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:44.066 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:44.066 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:44.326 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.326 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:44.326 00:12:44.326 real 2m29.727s 00:12:44.326 user 1m49.543s 00:12:44.326 sys 0m18.587s 00:12:44.326 10:07:12 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.326 ************************************ 00:12:44.326 END TEST sw_hotplug 00:12:44.326 ************************************ 00:12:44.326 10:07:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.326 10:07:12 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:44.326 10:07:12 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:44.326 10:07:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:44.326 10:07:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.327 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:44.327 ************************************ 00:12:44.327 START TEST nvme_xnvme 00:12:44.327 ************************************ 00:12:44.327 10:07:12 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:44.588 * Looking for test storage... 00:12:44.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:44.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.588 --rc genhtml_branch_coverage=1 00:12:44.588 --rc genhtml_function_coverage=1 00:12:44.588 --rc genhtml_legend=1 00:12:44.588 --rc geninfo_all_blocks=1 00:12:44.588 --rc geninfo_unexecuted_blocks=1 00:12:44.588 00:12:44.588 ' 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:44.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.588 --rc genhtml_branch_coverage=1 00:12:44.588 --rc genhtml_function_coverage=1 00:12:44.588 --rc genhtml_legend=1 00:12:44.588 --rc geninfo_all_blocks=1 00:12:44.588 --rc geninfo_unexecuted_blocks=1 00:12:44.588 00:12:44.588 ' 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:44.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.588 --rc genhtml_branch_coverage=1 00:12:44.588 --rc genhtml_function_coverage=1 00:12:44.588 --rc genhtml_legend=1 00:12:44.588 --rc geninfo_all_blocks=1 00:12:44.588 --rc geninfo_unexecuted_blocks=1 00:12:44.588 00:12:44.588 ' 00:12:44.588 10:07:12 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:44.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.588 --rc genhtml_branch_coverage=1 00:12:44.588 --rc genhtml_function_coverage=1 00:12:44.588 --rc genhtml_legend=1 00:12:44.588 --rc geninfo_all_blocks=1 00:12:44.588 --rc geninfo_unexecuted_blocks=1 00:12:44.588 00:12:44.588 ' 00:12:44.588 10:07:12 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.588 10:07:12 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.588 10:07:12 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.588 10:07:12 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.588 10:07:12 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.589 10:07:12 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:44.589 10:07:12 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.589 10:07:12 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:44.589 10:07:12 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:44.589 10:07:12 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.589 10:07:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.589 ************************************ 00:12:44.589 START TEST xnvme_to_malloc_dd_copy 00:12:44.589 ************************************ 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:44.589 10:07:12 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:44.589 10:07:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:44.589 { 00:12:44.589 "subsystems": [ 00:12:44.589 { 00:12:44.589 "subsystem": "bdev", 00:12:44.589 "config": [ 00:12:44.589 { 00:12:44.589 "params": { 00:12:44.589 "block_size": 512, 00:12:44.589 "num_blocks": 2097152, 00:12:44.589 "name": "malloc0" 00:12:44.589 }, 00:12:44.589 "method": "bdev_malloc_create" 00:12:44.589 }, 00:12:44.589 { 00:12:44.589 "params": { 00:12:44.589 "io_mechanism": "libaio", 00:12:44.589 "filename": "/dev/nullb0", 00:12:44.589 "name": "null0" 00:12:44.589 }, 00:12:44.589 "method": "bdev_xnvme_create" 00:12:44.589 }, 00:12:44.589 { 00:12:44.589 "method": "bdev_wait_for_examine" 00:12:44.589 } 00:12:44.589 ] 00:12:44.589 } 00:12:44.589 ] 00:12:44.589 } 00:12:44.589 [2024-11-03 10:07:12.918397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
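Note: the copy itself is a single spdk_dd invocation (xnvme.sh line 42): gen_conf prints the JSON subsystem config shown above, and the --json /dev/fd/62 argument in the trace indicates it reaches spdk_dd through process substitution. A sketch, with the binary path taken verbatim from the log:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path as it appears in the trace
    # malloc0 -> null0: push the 1 GiB malloc bdev through the xnvme bdev
    "$SPDK_BIN/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)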
00:12:44.589 [2024-11-03 10:07:12.918535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80459 ] 00:12:44.850 [2024-11-03 10:07:13.056349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.850 [2024-11-03 10:07:13.106253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.237  [2024-11-03T10:07:15.542Z] Copying: 222/1024 [MB] (222 MBps) [2024-11-03T10:07:16.484Z] Copying: 446/1024 [MB] (224 MBps) [2024-11-03T10:07:17.868Z] Copying: 671/1024 [MB] (224 MBps) [2024-11-03T10:07:17.868Z] Copying: 951/1024 [MB] (279 MBps) [2024-11-03T10:07:18.129Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:12:49.767 00:12:49.767 10:07:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:49.767 10:07:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:49.767 10:07:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:49.767 10:07:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:49.767 { 00:12:49.767 "subsystems": [ 00:12:49.767 { 00:12:49.767 "subsystem": "bdev", 00:12:49.767 "config": [ 00:12:49.767 { 00:12:49.767 "params": { 00:12:49.767 "block_size": 512, 00:12:49.767 "num_blocks": 2097152, 00:12:49.767 "name": "malloc0" 00:12:49.767 }, 00:12:49.767 "method": "bdev_malloc_create" 00:12:49.767 }, 00:12:49.767 { 00:12:49.767 "params": { 00:12:49.767 "io_mechanism": "libaio", 00:12:49.767 "filename": "/dev/nullb0", 00:12:49.767 "name": "null0" 00:12:49.767 }, 00:12:49.767 "method": "bdev_xnvme_create" 00:12:49.767 }, 00:12:49.767 { 00:12:49.767 "method": "bdev_wait_for_examine" 00:12:49.767 } 00:12:49.767 ] 00:12:49.767 } 00:12:49.767 ] 00:12:49.767 } 00:12:49.767 [2024-11-03 10:07:18.096353] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
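Note: as a sanity check on the rates printed above, a 1024 MB transfer at the reported averages works out to roughly 4.2 s for the malloc0-to-null0 pass (241 MBps) and about 3.3 s for the null0-to-malloc0 read-back (308 MBps), which is consistent with the interval stamps:

    awk 'BEGIN { printf "1024/241 = %.1f s   1024/308 = %.1f s\n", 1024/241, 1024/308 }'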
00:12:49.767 [2024-11-03 10:07:18.096588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80522 ] 00:12:50.028 [2024-11-03 10:07:18.229204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.028 [2024-11-03 10:07:18.259608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.413  [2024-11-03T10:07:20.718Z] Copying: 308/1024 [MB] (308 MBps) [2024-11-03T10:07:21.659Z] Copying: 616/1024 [MB] (307 MBps) [2024-11-03T10:07:21.920Z] Copying: 926/1024 [MB] (309 MBps) [2024-11-03T10:07:22.181Z] Copying: 1024/1024 [MB] (average 308 MBps) 00:12:53.819 00:12:53.819 10:07:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:53.819 10:07:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:53.819 10:07:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:53.819 10:07:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:53.819 10:07:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:53.819 10:07:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.819 { 00:12:53.819 "subsystems": [ 00:12:53.819 { 00:12:53.819 "subsystem": "bdev", 00:12:53.819 "config": [ 00:12:53.819 { 00:12:53.819 "params": { 00:12:53.819 "block_size": 512, 00:12:53.819 "num_blocks": 2097152, 00:12:53.819 "name": "malloc0" 00:12:53.819 }, 00:12:53.819 "method": "bdev_malloc_create" 00:12:53.819 }, 00:12:53.819 { 00:12:53.819 "params": { 00:12:53.819 "io_mechanism": "io_uring", 00:12:53.819 "filename": "/dev/nullb0", 00:12:53.819 "name": "null0" 00:12:53.819 }, 00:12:53.819 "method": "bdev_xnvme_create" 00:12:53.819 }, 00:12:53.819 { 00:12:53.819 "method": "bdev_wait_for_examine" 00:12:53.819 } 00:12:53.819 ] 00:12:53.819 } 00:12:53.819 ] 00:12:53.819 } 00:12:54.080 [2024-11-03 10:07:22.193291] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
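Note: both I/O engines reuse one config table. The trace at xnvme.sh lines 25-39 populates two bash associative arrays that gen_conf later serializes into the JSON bdev config, and line 39 simply flips the engine key between passes. A sketch with the key/value pairs exactly as they appear in the log:

    declare -A method_bdev_malloc_create_0=(
        [name]=malloc0 [num_blocks]=2097152 [block_size]=512
    )
    declare -A method_bdev_xnvme_create_0=(
        [name]=null0 [filename]=/dev/nullb0
        [io_mechanism]=libaio   # set to io_uring for the second pass, per line 39
    )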
00:12:54.080 [2024-11-03 10:07:22.193405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80577 ] 00:12:54.080 [2024-11-03 10:07:22.329291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.080 [2024-11-03 10:07:22.364483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.466  [2024-11-03T10:07:24.770Z] Copying: 314/1024 [MB] (314 MBps) [2024-11-03T10:07:25.712Z] Copying: 630/1024 [MB] (315 MBps) [2024-11-03T10:07:25.973Z] Copying: 945/1024 [MB] (314 MBps) [2024-11-03T10:07:26.234Z] Copying: 1024/1024 [MB] (average 315 MBps) 00:12:57.872 00:12:57.872 10:07:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:57.872 10:07:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:57.872 10:07:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:57.872 10:07:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:57.872 { 00:12:57.872 "subsystems": [ 00:12:57.872 { 00:12:57.872 "subsystem": "bdev", 00:12:57.872 "config": [ 00:12:57.872 { 00:12:57.872 "params": { 00:12:57.872 "block_size": 512, 00:12:57.872 "num_blocks": 2097152, 00:12:57.872 "name": "malloc0" 00:12:57.872 }, 00:12:57.872 "method": "bdev_malloc_create" 00:12:57.872 }, 00:12:57.872 { 00:12:57.872 "params": { 00:12:57.872 "io_mechanism": "io_uring", 00:12:57.872 "filename": "/dev/nullb0", 00:12:57.872 "name": "null0" 00:12:57.872 }, 00:12:57.872 "method": "bdev_xnvme_create" 00:12:57.872 }, 00:12:57.872 { 00:12:57.872 "method": "bdev_wait_for_examine" 00:12:57.872 } 00:12:57.872 ] 00:12:57.872 } 00:12:57.872 ] 00:12:57.872 } 00:12:57.872 [2024-11-03 10:07:26.207242] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
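Note: every pass in this test (and the bdevperf runs below) sits on a 1 GiB null block device created by init_null_blk gb=1 (dd/common.sh lines 186-187) and torn down by remove_null_blk (line 191). A sketch of that pair as the trace suggests; whether an already-loaded module is reused or reloaded first is an assumption, since the log shows only the existence test and the modprobe:

    init_null_blk() {
        # Load null_blk with the requested parameters; gb=1 yields a
        # 1 GiB /dev/nullb0 for the xnvme bdev to sit on.
        [[ -e /sys/module/null_blk ]] && modprobe -r null_blk
        modprobe null_blk "$@"
    }
    remove_null_blk() { modprobe -r null_blk; }   # dd/common.sh line 191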
00:12:57.872 [2024-11-03 10:07:26.207356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80631 ] 00:12:58.133 [2024-11-03 10:07:26.341608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.133 [2024-11-03 10:07:26.375886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.529  [2024-11-03T10:07:28.858Z] Copying: 319/1024 [MB] (319 MBps) [2024-11-03T10:07:29.798Z] Copying: 639/1024 [MB] (320 MBps) [2024-11-03T10:07:29.798Z] Copying: 959/1024 [MB] (320 MBps) [2024-11-03T10:07:30.368Z] Copying: 1024/1024 [MB] (average 319 MBps) 00:13:02.006 00:13:02.006 10:07:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:02.006 10:07:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:02.006 ************************************ 00:13:02.006 END TEST xnvme_to_malloc_dd_copy 00:13:02.006 ************************************ 00:13:02.006 00:13:02.006 real 0m17.312s 00:13:02.006 user 0m14.374s 00:13:02.006 sys 0m2.424s 00:13:02.006 10:07:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.006 10:07:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:02.006 10:07:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:02.006 10:07:30 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:02.006 10:07:30 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.006 10:07:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.006 ************************************ 00:13:02.006 START TEST xnvme_bdevperf 00:13:02.006 ************************************ 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:02.006 
10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:02.006 10:07:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.006 { 00:13:02.006 "subsystems": [ 00:13:02.006 { 00:13:02.006 "subsystem": "bdev", 00:13:02.006 "config": [ 00:13:02.006 { 00:13:02.006 "params": { 00:13:02.006 "io_mechanism": "libaio", 00:13:02.006 "filename": "/dev/nullb0", 00:13:02.006 "name": "null0" 00:13:02.006 }, 00:13:02.006 "method": "bdev_xnvme_create" 00:13:02.006 }, 00:13:02.006 { 00:13:02.006 "method": "bdev_wait_for_examine" 00:13:02.006 } 00:13:02.006 ] 00:13:02.006 } 00:13:02.006 ] 00:13:02.006 } 00:13:02.006 [2024-11-03 10:07:30.291162] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:02.006 [2024-11-03 10:07:30.291293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80708 ] 00:13:02.266 [2024-11-03 10:07:30.426489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.266 [2024-11-03 10:07:30.461266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.266 Running I/O for 5 seconds... 00:13:04.595 205120.00 IOPS, 801.25 MiB/s [2024-11-03T10:07:33.906Z] 205408.00 IOPS, 802.38 MiB/s [2024-11-03T10:07:34.848Z] 205546.67 IOPS, 802.92 MiB/s [2024-11-03T10:07:35.789Z] 205600.00 IOPS, 803.12 MiB/s [2024-11-03T10:07:35.789Z] 205632.00 IOPS, 803.25 MiB/s 00:13:07.427 Latency(us) 00:13:07.427 [2024-11-03T10:07:35.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.427 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:07.427 null0 : 5.00 205573.14 803.02 0.00 0.00 309.15 304.05 1531.27 00:13:07.427 [2024-11-03T10:07:35.789Z] =================================================================================================================== 00:13:07.427 [2024-11-03T10:07:35.789Z] Total : 205573.14 803.02 0.00 0.00 309.15 304.05 1531.27 00:13:07.427 10:07:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:07.427 10:07:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:07.427 10:07:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:07.427 10:07:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:07.427 10:07:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.427 10:07:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.427 { 00:13:07.427 "subsystems": [ 00:13:07.427 { 00:13:07.427 "subsystem": "bdev", 00:13:07.427 "config": [ 00:13:07.427 { 00:13:07.427 "params": { 00:13:07.427 "io_mechanism": "io_uring", 00:13:07.427 "filename": "/dev/nullb0", 00:13:07.427 "name": "null0" 00:13:07.427 }, 00:13:07.427 "method": "bdev_xnvme_create" 00:13:07.427 }, 
00:13:07.427 { 00:13:07.427 "method": "bdev_wait_for_examine" 00:13:07.427 } 00:13:07.427 ] 00:13:07.427 } 00:13:07.427 ] 00:13:07.427 } 00:13:07.427 [2024-11-03 10:07:35.750319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:07.427 [2024-11-03 10:07:35.750422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80772 ] 00:13:07.688 [2024-11-03 10:07:35.884091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.688 [2024-11-03 10:07:35.920552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.688 Running I/O for 5 seconds... 00:13:10.013 235392.00 IOPS, 919.50 MiB/s [2024-11-03T10:07:39.315Z] 235328.00 IOPS, 919.25 MiB/s [2024-11-03T10:07:40.255Z] 235285.33 IOPS, 919.08 MiB/s [2024-11-03T10:07:41.196Z] 235104.00 IOPS, 918.38 MiB/s 00:13:12.834 Latency(us) 00:13:12.834 [2024-11-03T10:07:41.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.834 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:12.834 null0 : 5.00 234988.37 917.92 0.00 0.00 269.97 151.24 2344.17 00:13:12.834 [2024-11-03T10:07:41.196Z] =================================================================================================================== 00:13:12.834 [2024-11-03T10:07:41.196Z] Total : 234988.37 917.92 0.00 0.00 269.97 151.24 2344.17 00:13:12.834 10:07:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:12.834 10:07:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:12.834 ************************************ 00:13:12.834 END TEST xnvme_bdevperf 00:13:12.834 ************************************ 00:13:12.834 00:13:12.834 real 0m10.970s 00:13:12.834 user 0m8.615s 00:13:12.834 sys 0m2.102s 00:13:12.834 10:07:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.834 10:07:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.095 ************************************ 00:13:13.095 END TEST nvme_xnvme 00:13:13.095 ************************************ 00:13:13.095 00:13:13.095 real 0m28.558s 00:13:13.095 user 0m23.114s 00:13:13.095 sys 0m4.640s 00:13:13.095 10:07:41 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.095 10:07:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.095 10:07:41 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:13.095 10:07:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.095 10:07:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.095 10:07:41 -- common/autotest_common.sh@10 -- # set +x 00:13:13.095 ************************************ 00:13:13.095 START TEST blockdev_xnvme 00:13:13.095 ************************************ 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:13.095 * Looking for test storage... 
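The xnvme_bdevperf run that just finished compares the libaio and io_uring I/O mechanisms against the same 1 GiB null_blk device, at queue depth 64 with 4 KiB random reads. A minimal sketch of reproducing that comparison by hand, assuming this job's repo path (adjust SPDK_DIR for other setups) and root privileges for modprobe:

# Sketch only: the libaio-vs-io_uring comparison the test drives above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

modprobe null_blk gb=1            # 1 GiB backing device at /dev/nullb0

for io in libaio io_uring; do
    # Same bdev subsystem config the test pipes to bdevperf on /dev/fd/62.
    cat > /tmp/null0.json <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "$io",
            "filename": "/dev/nullb0",
            "name": "null0"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
    # 4 KiB random reads, queue depth 64, 5 seconds, against bdev "null0".
    "$SPDK_DIR/build/examples/bdevperf" --json /tmp/null0.json \
        -q 64 -w randread -t 5 -T null0 -o 4096
done

modprobe -r null_blk              # clean up, as remove_null_blk does

On this null_blk target the io_uring pass comes out ahead, roughly 235K against 205K IOPS, which is exactly the difference the two Latency tables above capture.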
00:13:13.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.095 10:07:41 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:13.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.095 --rc genhtml_branch_coverage=1 00:13:13.095 --rc genhtml_function_coverage=1 00:13:13.095 --rc genhtml_legend=1 00:13:13.095 --rc geninfo_all_blocks=1 00:13:13.095 --rc geninfo_unexecuted_blocks=1 00:13:13.095 00:13:13.095 ' 00:13:13.095 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:13.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.095 --rc genhtml_branch_coverage=1 00:13:13.095 --rc genhtml_function_coverage=1 00:13:13.096 --rc genhtml_legend=1 
00:13:13.096 --rc geninfo_all_blocks=1 00:13:13.096 --rc geninfo_unexecuted_blocks=1 00:13:13.096 00:13:13.096 ' 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:13.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.096 --rc genhtml_branch_coverage=1 00:13:13.096 --rc genhtml_function_coverage=1 00:13:13.096 --rc genhtml_legend=1 00:13:13.096 --rc geninfo_all_blocks=1 00:13:13.096 --rc geninfo_unexecuted_blocks=1 00:13:13.096 00:13:13.096 ' 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:13.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.096 --rc genhtml_branch_coverage=1 00:13:13.096 --rc genhtml_function_coverage=1 00:13:13.096 --rc genhtml_legend=1 00:13:13.096 --rc geninfo_all_blocks=1 00:13:13.096 --rc geninfo_unexecuted_blocks=1 00:13:13.096 00:13:13.096 ' 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80909 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:13.096 10:07:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80909 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 80909 ']' 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.096 10:07:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.356 [2024-11-03 10:07:41.492834] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:13.356 [2024-11-03 10:07:41.493154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80909 ] 00:13:13.356 [2024-11-03 10:07:41.627884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.356 [2024-11-03 10:07:41.666186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.298 10:07:42 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.298 10:07:42 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:14.298 10:07:42 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:14.298 10:07:42 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:14.298 10:07:42 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:14.298 10:07:42 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:14.298 10:07:42 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:14.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.560 Waiting for block devices as requested 00:13:14.560 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.560 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.560 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.820 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.111 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.111 10:07:48 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:20.111 nvme0n1 00:13:20.111 nvme1n1 00:13:20.111 nvme2n1 00:13:20.111 nvme2n2 00:13:20.111 nvme2n3 00:13:20.111 nvme3n1 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.111 10:07:48 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.111 10:07:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:20.111 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:20.112 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f359f997-512c-4e45-a374-9ebe23f5ade4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f359f997-512c-4e45-a374-9ebe23f5ade4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "684a29eb-1381-4c4a-a096-5c3bf30f2478"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "684a29eb-1381-4c4a-a096-5c3bf30f2478",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "053ae396-0a9a-44fc-b182-ba8c14fb5549"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "053ae396-0a9a-44fc-b182-ba8c14fb5549",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "bd2d7366-d305-4132-be5c-01b964908ac2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bd2d7366-d305-4132-be5c-01b964908ac2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8e9c5143-e32e-4ab9-8dfc-d908614240d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8e9c5143-e32e-4ab9-8dfc-d908614240d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0945ee2f-04f3-40eb-8c65-43b38f87b27a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0945ee2f-04f3-40eb-8c65-43b38f87b27a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:20.112 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:20.112 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:20.112 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:20.112 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80909 00:13:20.112 10:07:48 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 80909 ']' 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 80909 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80909 00:13:20.112 killing process with pid 80909 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80909' 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 80909 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 80909 00:13:20.112 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:20.112 10:07:48 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.112 10:07:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.373 ************************************ 00:13:20.373 START TEST bdev_hello_world 00:13:20.373 ************************************ 00:13:20.373 10:07:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:20.373 [2024-11-03 10:07:48.530624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:20.373 [2024-11-03 10:07:48.530746] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81256 ] 00:13:20.373 [2024-11-03 10:07:48.665997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.373 [2024-11-03 10:07:48.695106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.634 [2024-11-03 10:07:48.850983] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:20.634 [2024-11-03 10:07:48.851141] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:20.634 [2024-11-03 10:07:48.851163] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:20.634 [2024-11-03 10:07:48.852730] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:20.634 [2024-11-03 10:07:48.852922] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:20.634 [2024-11-03 10:07:48.852933] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:20.634 [2024-11-03 10:07:48.853108] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
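For reference, the hello_bdev invocation driving the notices above (and the shutdown just below) boils down to the following sketch, assuming the bdev.json generated from the bdev_xnvme_create printf earlier in this log:

# Sketch: the hello_bdev example this test wraps. It opens the named bdev
# from the JSON config, writes "Hello World!" through an I/O channel,
# reads it back, and stops the app.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/hello_bdev" \
    --json "$SPDK_DIR/test/bdev/bdev.json" -b nvme0n1
# On success the app logs write_complete followed by
#   read_complete: Read string from bdev : Hello World!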
00:13:20.634 00:13:20.634 [2024-11-03 10:07:48.853128] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:20.634 00:13:20.634 real 0m0.504s 00:13:20.634 user 0m0.269s 00:13:20.634 sys 0m0.128s 00:13:20.634 ************************************ 00:13:20.634 END TEST bdev_hello_world 00:13:20.634 ************************************ 00:13:20.634 10:07:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.634 10:07:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:20.895 10:07:49 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:20.895 10:07:49 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.895 10:07:49 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.895 10:07:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.895 ************************************ 00:13:20.895 START TEST bdev_bounds 00:13:20.895 ************************************ 00:13:20.895 Process bdevio pid: 81276 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81276 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81276' 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81276 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81276 ']' 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.895 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:20.895 [2024-11-03 10:07:49.094449] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:20.895 [2024-11-03 10:07:49.094678] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81276 ] 00:13:20.896 [2024-11-03 10:07:49.227797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.159 [2024-11-03 10:07:49.258573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.159 [2024-11-03 10:07:49.258787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.159 [2024-11-03 10:07:49.258795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.788 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.788 10:07:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:21.788 10:07:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:21.788 I/O targets: 00:13:21.788 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:21.788 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:21.788 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.788 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.788 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.788 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:21.788 00:13:21.788 00:13:21.788 CUnit - A unit testing framework for C - Version 2.1-3 00:13:21.788 http://cunit.sourceforge.net/ 00:13:21.788 00:13:21.788 00:13:21.788 Suite: bdevio tests on: nvme3n1 00:13:21.788 Test: blockdev write read block ...passed 00:13:21.788 Test: blockdev write zeroes read block ...passed 00:13:21.788 Test: blockdev write zeroes read no split ...passed 00:13:21.789 Test: blockdev write zeroes read split ...passed 00:13:21.789 Test: blockdev write zeroes read split partial ...passed 00:13:21.789 Test: blockdev reset ...passed 00:13:21.789 Test: blockdev write read 8 blocks ...passed 00:13:21.789 Test: blockdev write read size > 128k ...passed 00:13:21.789 Test: blockdev write read invalid size ...passed 00:13:21.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.789 Test: blockdev write read max offset ...passed 00:13:21.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.789 Test: blockdev writev readv 8 blocks ...passed 00:13:21.789 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.789 Test: blockdev writev readv block ...passed 00:13:21.789 Test: blockdev writev readv size > 128k ...passed 00:13:21.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.789 Test: blockdev comparev and writev ...passed 00:13:21.789 Test: blockdev nvme passthru rw ...passed 00:13:21.789 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.789 Test: blockdev nvme admin passthru ...passed 00:13:21.789 Test: blockdev copy ...passed 00:13:21.789 Suite: bdevio tests on: nvme2n3 00:13:21.789 Test: blockdev write read block ...passed 00:13:21.789 Test: blockdev write zeroes read block ...passed 00:13:21.789 Test: blockdev write zeroes read no split ...passed 00:13:21.789 Test: blockdev write zeroes read split ...passed 00:13:21.789 Test: blockdev write zeroes read split partial ...passed 00:13:21.789 Test: blockdev reset ...passed 
00:13:21.789 Test: blockdev write read 8 blocks ...passed 00:13:21.789 Test: blockdev write read size > 128k ...passed 00:13:21.789 Test: blockdev write read invalid size ...passed 00:13:21.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.789 Test: blockdev write read max offset ...passed 00:13:21.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.789 Test: blockdev writev readv 8 blocks ...passed 00:13:21.789 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.789 Test: blockdev writev readv block ...passed 00:13:21.789 Test: blockdev writev readv size > 128k ...passed 00:13:21.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.789 Test: blockdev comparev and writev ...passed 00:13:21.789 Test: blockdev nvme passthru rw ...passed 00:13:21.789 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.789 Test: blockdev nvme admin passthru ...passed 00:13:21.789 Test: blockdev copy ...passed 00:13:21.789 Suite: bdevio tests on: nvme2n2 00:13:21.789 Test: blockdev write read block ...passed 00:13:21.789 Test: blockdev write zeroes read block ...passed 00:13:21.789 Test: blockdev write zeroes read no split ...passed 00:13:21.789 Test: blockdev write zeroes read split ...passed 00:13:21.789 Test: blockdev write zeroes read split partial ...passed 00:13:21.789 Test: blockdev reset ...passed 00:13:21.789 Test: blockdev write read 8 blocks ...passed 00:13:21.789 Test: blockdev write read size > 128k ...passed 00:13:21.789 Test: blockdev write read invalid size ...passed 00:13:21.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.789 Test: blockdev write read max offset ...passed 00:13:21.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.789 Test: blockdev writev readv 8 blocks ...passed 00:13:21.789 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.789 Test: blockdev writev readv block ...passed 00:13:21.789 Test: blockdev writev readv size > 128k ...passed 00:13:21.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.789 Test: blockdev comparev and writev ...passed 00:13:21.789 Test: blockdev nvme passthru rw ...passed 00:13:21.789 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.789 Test: blockdev nvme admin passthru ...passed 00:13:21.789 Test: blockdev copy ...passed 00:13:21.789 Suite: bdevio tests on: nvme2n1 00:13:21.789 Test: blockdev write read block ...passed 00:13:21.789 Test: blockdev write zeroes read block ...passed 00:13:21.789 Test: blockdev write zeroes read no split ...passed 00:13:21.789 Test: blockdev write zeroes read split ...passed 00:13:22.050 Test: blockdev write zeroes read split partial ...passed 00:13:22.050 Test: blockdev reset ...passed 00:13:22.050 Test: blockdev write read 8 blocks ...passed 00:13:22.050 Test: blockdev write read size > 128k ...passed 00:13:22.050 Test: blockdev write read invalid size ...passed 00:13:22.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.050 Test: blockdev write read max offset ...passed 00:13:22.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.050 Test: blockdev writev readv 8 blocks 
...passed 00:13:22.050 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.050 Test: blockdev writev readv block ...passed 00:13:22.050 Test: blockdev writev readv size > 128k ...passed 00:13:22.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.050 Test: blockdev comparev and writev ...passed 00:13:22.050 Test: blockdev nvme passthru rw ...passed 00:13:22.050 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.050 Test: blockdev nvme admin passthru ...passed 00:13:22.050 Test: blockdev copy ...passed 00:13:22.050 Suite: bdevio tests on: nvme1n1 00:13:22.050 Test: blockdev write read block ...passed 00:13:22.050 Test: blockdev write zeroes read block ...passed 00:13:22.050 Test: blockdev write zeroes read no split ...passed 00:13:22.050 Test: blockdev write zeroes read split ...passed 00:13:22.050 Test: blockdev write zeroes read split partial ...passed 00:13:22.050 Test: blockdev reset ...passed 00:13:22.050 Test: blockdev write read 8 blocks ...passed 00:13:22.050 Test: blockdev write read size > 128k ...passed 00:13:22.050 Test: blockdev write read invalid size ...passed 00:13:22.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.050 Test: blockdev write read max offset ...passed 00:13:22.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.050 Test: blockdev writev readv 8 blocks ...passed 00:13:22.050 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.050 Test: blockdev writev readv block ...passed 00:13:22.050 Test: blockdev writev readv size > 128k ...passed 00:13:22.051 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.051 Test: blockdev comparev and writev ...passed 00:13:22.051 Test: blockdev nvme passthru rw ...passed 00:13:22.051 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.051 Test: blockdev nvme admin passthru ...passed 00:13:22.051 Test: blockdev copy ...passed 00:13:22.051 Suite: bdevio tests on: nvme0n1 00:13:22.051 Test: blockdev write read block ...passed 00:13:22.051 Test: blockdev write zeroes read block ...passed 00:13:22.051 Test: blockdev write zeroes read no split ...passed 00:13:22.051 Test: blockdev write zeroes read split ...passed 00:13:22.051 Test: blockdev write zeroes read split partial ...passed 00:13:22.051 Test: blockdev reset ...passed 00:13:22.051 Test: blockdev write read 8 blocks ...passed 00:13:22.051 Test: blockdev write read size > 128k ...passed 00:13:22.051 Test: blockdev write read invalid size ...passed 00:13:22.051 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.051 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.051 Test: blockdev write read max offset ...passed 00:13:22.051 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.051 Test: blockdev writev readv 8 blocks ...passed 00:13:22.051 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.051 Test: blockdev writev readv block ...passed 00:13:22.051 Test: blockdev writev readv size > 128k ...passed 00:13:22.051 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.051 Test: blockdev comparev and writev ...passed 00:13:22.051 Test: blockdev nvme passthru rw ...passed 00:13:22.051 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.051 Test: blockdev nvme admin passthru ...passed 00:13:22.051 Test: blockdev copy ...passed 
00:13:22.051 00:13:22.051 Run Summary: Type Total Ran Passed Failed Inactive 00:13:22.051 suites 6 6 n/a 0 0 00:13:22.051 tests 138 138 138 0 0 00:13:22.051 asserts 780 780 780 0 n/a 00:13:22.051 00:13:22.051 Elapsed time = 0.588 seconds 00:13:22.051 0 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81276 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81276 ']' 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81276 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81276 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81276' 00:13:22.051 killing process with pid 81276 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81276 00:13:22.051 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81276 00:13:22.312 10:07:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:22.312 00:13:22.312 real 0m1.575s 00:13:22.312 user 0m3.830s 00:13:22.312 sys 0m0.275s 00:13:22.312 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.312 10:07:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:22.312 ************************************ 00:13:22.312 END TEST bdev_bounds 00:13:22.312 ************************************ 00:13:22.312 10:07:50 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.312 10:07:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:22.312 10:07:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.312 10:07:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.573 ************************************ 00:13:22.573 START TEST bdev_nbd 00:13:22.573 ************************************ 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
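The bdev_nbd test being set up here exports each of the six xnvme bdevs as a kernel /dev/nbdX node through the SPDK NBD RPCs, then verifies every mapping with a single 4 KiB direct read, as the dd transcripts below show. A sketch of that start/verify/stop cycle for one bdev, assuming bdev_svc is already listening on /var/tmp/spdk-nbd.sock:

# Sketch: map one xnvme bdev over NBD, verify with a direct read, unmap.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Without an explicit /dev/nbdX argument the RPC allocates one and prints
# its path, which is how the test obtains /dev/nbd0, /dev/nbd1, ...
nbd=$($RPC nbd_start_disk nvme0n1)
grep -q -w "${nbd##*/}" /proc/partitions      # what waitfornbd polls for
dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
$RPC nbd_stop_disk "$nbd"                     # tear the export down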
00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81334 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81334 /var/tmp/spdk-nbd.sock 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81334 ']' 00:13:22.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:22.573 10:07:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:22.573 [2024-11-03 10:07:50.761089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:22.573 [2024-11-03 10:07:50.761567] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.573 [2024-11-03 10:07:50.898104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.833 [2024-11-03 10:07:50.974033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.404 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.665 
1+0 records in 00:13:23.665 1+0 records out 00:13:23.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00146484 s, 2.8 MB/s 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.665 10:07:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.926 1+0 records in 00:13:23.926 1+0 records out 00:13:23.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001365 s, 3.0 MB/s 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.926 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.927 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.927 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.927 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.927 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.927 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:24.188 10:07:52 
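The pattern repeating through this stretch of the trace, once per device, is the waitfornbd readiness probe from common/autotest_common.sh: poll until the kernel lists the node in /proc/partitions, then prove the device actually serves reads by pulling a single 4 KiB block with O_DIRECT. A minimal sketch reconstructed from the xtrace lines alone; the retry pacing and the exact control flow are assumptions, not copied from the SPDK source:

    waitfornbd() {
        local nbd_name=$1
        local i
        # sh@871-873: wait for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing; no sleep shows up in the trace
        done
        # sh@884-889: read one 4 KiB block with O_DIRECT and accept the
        # device once the copy comes back non-empty.
        local testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s "$testfile")
            rm -f "$testfile"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }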
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.188 1+0 records in 00:13:24.188 1+0 records out 00:13:24.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00152252 s, 2.7 MB/s 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.188 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.449 1+0 records in 00:13:24.449 1+0 records out 00:13:24.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977495 s, 4.2 MB/s 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.449 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.710 1+0 records in 00:13:24.710 1+0 records out 00:13:24.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078631 s, 5.2 MB/s 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.710 10:07:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:24.972 10:07:53 
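Driving those probes is the first export pass, nbd_start_disks_without_nbd_idx (nbd_common.sh@22-30): each bdev is handed to the app without naming a node, and the nbd_start_disk RPC prints whichever /dev/nbdN it allocated. Condensed from the trace, with the index arithmetic of the real loop elided:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    for bdev in "${bdev_list[@]}"; do
        # Prints the node it picked, e.g. /dev/nbd0, /dev/nbd1, ...
        nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
        waitfornbd "$(basename "$nbd_device")"
    done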
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.972 1+0 records in 00:13:24.972 1+0 records out 00:13:24.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115393 s, 3.5 MB/s 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.972 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd0", 00:13:25.233 "bdev_name": "nvme0n1" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd1", 00:13:25.233 "bdev_name": "nvme1n1" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd2", 00:13:25.233 "bdev_name": "nvme2n1" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd3", 00:13:25.233 "bdev_name": "nvme2n2" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd4", 00:13:25.233 "bdev_name": "nvme2n3" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd5", 00:13:25.233 "bdev_name": "nvme3n1" 00:13:25.233 } 00:13:25.233 ]' 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd0", 00:13:25.233 "bdev_name": "nvme0n1" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd1", 00:13:25.233 "bdev_name": "nvme1n1" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd2", 00:13:25.233 "bdev_name": "nvme2n1" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd3", 00:13:25.233 "bdev_name": "nvme2n2" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd4", 00:13:25.233 "bdev_name": "nvme2n3" 00:13:25.233 }, 00:13:25.233 { 00:13:25.233 "nbd_device": "/dev/nbd5", 00:13:25.233 "bdev_name": "nvme3n1" 00:13:25.233 } 00:13:25.233 ]' 00:13:25.233 10:07:53 
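The JSON echoed just above is the shape nbd_get_disks returns: one {nbd_device, bdev_name} pair per exported disk. The jq call that follows in the trace flattens it into the bare device paths that the stop loop then walks; as a two-liner, with rpc and sock as in the earlier sketch:

    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
    # -> /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5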
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.233 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.495 10:07:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.756 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.017 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.279 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.541 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.802 10:07:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:26.802 /dev/nbd0 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.063 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.063 1+0 records in 00:13:27.063 1+0 records out 00:13:27.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935105 s, 4.4 MB/s 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:27.064 /dev/nbd1 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.064 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.325 1+0 records in 00:13:27.325 1+0 records out 00:13:27.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113532 s, 3.6 MB/s 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.325 10:07:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:27.325 /dev/nbd10 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:27.325 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.326 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.326 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.326 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.326 1+0 records in 00:13:27.326 1+0 records out 00:13:27.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903163 s, 4.5 MB/s 00:13:27.326 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.326 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.326 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:27.587 /dev/nbd11 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.587 10:07:55 
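This second export pass pins each bdev to an explicit node (nbd_start_disks, nbd_common.sh@9-17), which is why nvme2n1 now sits on /dev/nbd10 rather than /dev/nbd2. Condensed from the trace, reusing the variables and helper sketched earlier:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < 6; i++)); do
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done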
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.587 1+0 records in 00:13:27.587 1+0 records out 00:13:27.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000984259 s, 4.2 MB/s 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.587 10:07:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:27.849 /dev/nbd12 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.849 1+0 records in 00:13:27.849 1+0 records out 00:13:27.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00171867 s, 2.4 MB/s 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.849 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:28.110 /dev/nbd13 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.110 1+0 records in 00:13:28.110 1+0 records out 00:13:28.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108176 s, 3.8 MB/s 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.110 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd0", 00:13:28.371 "bdev_name": "nvme0n1" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd1", 00:13:28.371 "bdev_name": "nvme1n1" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd10", 00:13:28.371 "bdev_name": "nvme2n1" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd11", 00:13:28.371 "bdev_name": "nvme2n2" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd12", 00:13:28.371 "bdev_name": "nvme2n3" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd13", 00:13:28.371 "bdev_name": "nvme3n1" 00:13:28.371 } 00:13:28.371 ]' 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd0", 00:13:28.371 "bdev_name": "nvme0n1" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd1", 00:13:28.371 "bdev_name": "nvme1n1" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd10", 00:13:28.371 "bdev_name": "nvme2n1" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd11", 00:13:28.371 "bdev_name": "nvme2n2" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd12", 00:13:28.371 "bdev_name": "nvme2n3" 00:13:28.371 }, 00:13:28.371 { 00:13:28.371 "nbd_device": "/dev/nbd13", 00:13:28.371 "bdev_name": "nvme3n1" 00:13:28.371 } 00:13:28.371 ]' 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:28.371 /dev/nbd1 00:13:28.371 /dev/nbd10 00:13:28.371 /dev/nbd11 00:13:28.371 /dev/nbd12 00:13:28.371 /dev/nbd13' 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:28.371 /dev/nbd1 00:13:28.371 /dev/nbd10 00:13:28.371 /dev/nbd11 00:13:28.371 /dev/nbd12 00:13:28.371 /dev/nbd13' 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:28.371 256+0 records in 00:13:28.371 256+0 records out 00:13:28.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998791 s, 105 MB/s 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.371 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:28.632 256+0 records in 00:13:28.632 256+0 records out 00:13:28.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229685 s, 4.6 MB/s 00:13:28.632 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.632 10:07:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:28.893 256+0 records in 00:13:28.893 256+0 records out 00:13:28.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245267 s, 
4.3 MB/s 00:13:28.893 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.893 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:29.153 256+0 records in 00:13:29.153 256+0 records out 00:13:29.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207085 s, 5.1 MB/s 00:13:29.153 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.153 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:29.412 256+0 records in 00:13:29.412 256+0 records out 00:13:29.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23674 s, 4.4 MB/s 00:13:29.412 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.412 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:29.672 256+0 records in 00:13:29.672 256+0 records out 00:13:29.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22775 s, 4.6 MB/s 00:13:29.672 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.672 10:07:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:29.933 256+0 records in 00:13:29.933 256+0 records out 00:13:29.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236205 s, 4.4 MB/s 00:13:29.933 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:29.933 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:29.933 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:29.934 
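The pass running here is the data-integrity check nbd_dd_data_verify (nbd_common.sh@70-85): seed one 1 MiB file from /dev/urandom, write it to every node with O_DIRECT, then read each node back; cmp exits non-zero on the first differing byte, which fails the test. Condensed, with nbd_list as above:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256      # 1 MiB seed
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"
    done
    rm "$tmp_file"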
10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.934 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.195 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.454 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.712 10:07:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:30.712 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.969 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.227 
10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.227 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:31.485 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:31.743 malloc_lvol_verify 00:13:31.743 10:07:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:32.001 727e6316-8e59-44f5-9e45-6ae2ca17de46 00:13:32.001 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:32.260 7b032c35-0bd3-429c-bedb-ca010b92ce7e 00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:32.260 /dev/nbd0 00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:32.260 mke2fs 1.47.0 (5-Feb-2023) 00:13:32.260 Discarding device blocks: 0/4096 
done
00:13:32.260 Creating filesystem with 4096 1k blocks and 1024 inodes
00:13:32.260
00:13:32.260 Allocating group tables: 0/1 done
00:13:32.260 Writing inode tables: 0/1 done
00:13:32.260 Creating journal (1024 blocks): done
00:13:32.260 Writing superblocks and filesystem accounting information: 0/1 done
00:13:32.260
00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:32.260 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81334
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81334 ']'
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81334
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81334
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:32.518 killing process with pid 81334
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81334'
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81334
00:13:32.518 10:08:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81334
00:13:32.781 10:08:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:13:32.781
00:13:32.781 real 0m10.350s
00:13:32.781 user 0m14.000s
00:13:32.781 sys 0m3.811s
00:13:32.781 10:08:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:32.781 ************************************
00:13:32.781 END TEST bdev_nbd
00:13:32.781 ************************************
00:13:32.781 10:08:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
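The last functional check before that teardown, nbd_with_lvol_verify (nbd_common.sh@131-142), deserves spelling out: it stacks a small logical volume on a fresh malloc bdev, exports it as /dev/nbd0, and treats a clean mkfs.ext4 run as evidence that the whole stack handles real filesystem I/O. Condensed from the trace, with rpc and sock as defined earlier; sizes and arguments are exactly as traced:

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # size 16, block size 512, as traced
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # size 4, as traced
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0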
00:13:32.781 10:08:01 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:32.781 10:08:01 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:32.781 10:08:01 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:32.781 10:08:01 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:32.781 10:08:01 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.781 10:08:01 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.781 10:08:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.781 ************************************ 00:13:32.781 START TEST bdev_fio 00:13:32.781 ************************************ 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:32.782 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:32.782 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:32.783 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:32.783 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:32.783 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:32.783 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:32.783 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:33.045 ************************************ 00:13:33.045 START TEST bdev_fio_rw_verify 00:13:33.045 ************************************ 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.045 10:08:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.045 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.045 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.045 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.045 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.045 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.045 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.045 fio-3.35 00:13:33.045 Starting 6 threads 00:13:45.280 00:13:45.280 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81736: Sun Nov 3 10:08:11 2024 00:13:45.280 read: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(491MiB/10003msec) 00:13:45.280 slat (usec): min=2, max=1586, avg= 6.72, stdev=14.01 00:13:45.280 clat (usec): min=75, max=7463, avg=1608.40, stdev=805.52 00:13:45.280 lat (usec): min=78, max=7477, avg=1615.13, stdev=806.18 
00:13:45.280 clat percentiles (usec): 00:13:45.280 | 50.000th=[ 1500], 99.000th=[ 4080], 99.900th=[ 5407], 99.990th=[ 6849], 00:13:45.280 | 99.999th=[ 7439] 00:13:45.280 write: IOPS=12.9k, BW=50.3MiB/s (52.7MB/s)(503MiB/10003msec); 0 zone resets 00:13:45.280 slat (usec): min=10, max=4665, avg=41.13, stdev=147.73 00:13:45.280 clat (usec): min=102, max=15677, avg=1826.84, stdev=889.23 00:13:45.280 lat (usec): min=123, max=15725, avg=1867.97, stdev=902.25 00:13:45.280 clat percentiles (usec): 00:13:45.280 | 50.000th=[ 1696], 99.000th=[ 4555], 99.900th=[ 6194], 99.990th=[ 8094], 00:13:45.280 | 99.999th=[15664] 00:13:45.280 bw ( KiB/s): min=43807, max=67304, per=100.00%, avg=51557.53, stdev=1031.63, samples=114 00:13:45.280 iops : min=10949, max=16826, avg=12889.00, stdev=257.93, samples=114 00:13:45.280 lat (usec) : 100=0.01%, 250=0.69%, 500=3.16%, 750=5.99%, 1000=9.19% 00:13:45.280 lat (msec) : 2=49.80%, 4=29.37%, 10=1.78%, 20=0.01% 00:13:45.280 cpu : usr=47.58%, sys=30.02%, ctx=4804, majf=0, minf=15205 00:13:45.280 IO depths : 1=11.5%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.280 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.280 issued rwts: total=125732,128707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.280 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:45.280 00:13:45.280 Run status group 0 (all jobs): 00:13:45.280 READ: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=491MiB (515MB), run=10003-10003msec 00:13:45.280 WRITE: bw=50.3MiB/s (52.7MB/s), 50.3MiB/s-50.3MiB/s (52.7MB/s-52.7MB/s), io=503MiB (527MB), run=10003-10003msec 00:13:45.280 ----------------------------------------------------- 00:13:45.280 Suppressions used: 00:13:45.280 count bytes template 00:13:45.280 6 48 /usr/src/fio/parse.c 00:13:45.280 2898 278208 /usr/src/fio/iolog.c 00:13:45.280 1 8 libtcmalloc_minimal.so 00:13:45.280 1 904 libcrypto.so 00:13:45.280 ----------------------------------------------------- 00:13:45.280 00:13:45.280 00:13:45.280 real 0m11.140s 00:13:45.280 user 0m29.288s 00:13:45.280 sys 0m18.297s 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.280 ************************************ 00:13:45.280 END TEST bdev_fio_rw_verify 00:13:45.280 ************************************ 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:45.280 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f359f997-512c-4e45-a374-9ebe23f5ade4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f359f997-512c-4e45-a374-9ebe23f5ade4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "684a29eb-1381-4c4a-a096-5c3bf30f2478"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "684a29eb-1381-4c4a-a096-5c3bf30f2478",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "053ae396-0a9a-44fc-b182-ba8c14fb5549"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "053ae396-0a9a-44fc-b182-ba8c14fb5549",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "bd2d7366-d305-4132-be5c-01b964908ac2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bd2d7366-d305-4132-be5c-01b964908ac2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8e9c5143-e32e-4ab9-8dfc-d908614240d5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8e9c5143-e32e-4ab9-8dfc-d908614240d5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0945ee2f-04f3-40eb-8c65-43b38f87b27a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0945ee2f-04f3-40eb-8c65-43b38f87b27a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:45.281 /home/vagrant/spdk_repo/spdk 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:45.281 00:13:45.281 real 0m11.307s 00:13:45.281 user 0m29.363s 
00:13:45.281 sys 0m18.369s 00:13:45.281 ************************************ 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.281 10:08:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:45.281 END TEST bdev_fio 00:13:45.281 ************************************ 00:13:45.281 10:08:12 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:45.281 10:08:12 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:45.281 10:08:12 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:45.281 10:08:12 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.281 10:08:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.281 ************************************ 00:13:45.281 START TEST bdev_verify 00:13:45.281 ************************************ 00:13:45.281 10:08:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:45.281 [2024-11-03 10:08:12.533725] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:45.281 [2024-11-03 10:08:12.533876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81902 ] 00:13:45.281 [2024-11-03 10:08:12.670472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:45.281 [2024-11-03 10:08:12.743530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.281 [2024-11-03 10:08:12.743610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.281 Running I/O for 5 seconds... 
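The bdev_verify pass launched just above is a plain bdevperf run against the bdev.json generated earlier in this suite. A sketch of the equivalent standalone invocation, assuming it is run from the SPDK repo root with the same paths this job uses:

./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# -q 128     128 outstanding I/Os per job
# -o 4096    4 KiB I/O size
# -w verify  write a pattern, read it back, compare
# -t 5       run for 5 seconds
# -m 0x3     run reactors on cores 0 and 1
# -C         judging by the paired Core Mask 0x1/0x2 jobs in the table below,
#            this lets every core in the mask drive every bdev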
00:13:47.224 23264.00 IOPS, 90.88 MiB/s [2024-11-03T10:08:16.532Z] 23392.00 IOPS, 91.38 MiB/s [2024-11-03T10:08:17.477Z] 23189.33 IOPS, 90.58 MiB/s [2024-11-03T10:08:18.423Z] 23328.00 IOPS, 91.12 MiB/s [2024-11-03T10:08:18.423Z] 23308.80 IOPS, 91.05 MiB/s 00:13:50.061 Latency(us) 00:13:50.061 [2024-11-03T10:08:18.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.061 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.061 Verification LBA range: start 0x0 length 0xa0000 00:13:50.061 nvme0n1 : 5.04 1879.05 7.34 0.00 0.00 68009.46 12653.49 65334.35 00:13:50.061 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.061 Verification LBA range: start 0xa0000 length 0xa0000 00:13:50.061 nvme0n1 : 5.06 1794.42 7.01 0.00 0.00 71190.47 6604.01 78643.20 00:13:50.061 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.061 Verification LBA range: start 0x0 length 0xbd0bd 00:13:50.061 nvme1n1 : 5.05 2327.32 9.09 0.00 0.00 54709.16 7360.20 64124.46 00:13:50.061 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.061 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:50.061 nvme1n1 : 5.06 2271.06 8.87 0.00 0.00 56093.47 5368.91 56865.08 00:13:50.061 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.061 Verification LBA range: start 0x0 length 0x80000 00:13:50.061 nvme2n1 : 5.03 1908.50 7.46 0.00 0.00 66730.90 13006.38 60494.77 00:13:50.061 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.061 Verification LBA range: start 0x80000 length 0x80000 00:13:50.061 nvme2n1 : 5.06 1846.66 7.21 0.00 0.00 68801.96 4965.61 63721.16 00:13:50.061 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.061 Verification LBA range: start 0x0 length 0x80000 00:13:50.062 nvme2n2 : 5.03 1882.38 7.35 0.00 0.00 67538.84 12855.14 63317.86 00:13:50.062 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.062 Verification LBA range: start 0x80000 length 0x80000 00:13:50.062 nvme2n2 : 5.08 1789.48 6.99 0.00 0.00 70845.14 8166.79 66544.25 00:13:50.062 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.062 Verification LBA range: start 0x0 length 0x80000 00:13:50.062 nvme2n3 : 5.07 1919.47 7.50 0.00 0.00 66126.88 5847.83 66947.54 00:13:50.062 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.062 Verification LBA range: start 0x80000 length 0x80000 00:13:50.062 nvme2n3 : 5.07 1791.38 7.00 0.00 0.00 70652.78 10536.17 80256.39 00:13:50.062 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.062 Verification LBA range: start 0x0 length 0x20000 00:13:50.062 nvme3n1 : 5.06 1895.66 7.40 0.00 0.00 66842.58 5721.80 66140.95 00:13:50.062 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.062 Verification LBA range: start 0x20000 length 0x20000 00:13:50.062 nvme3n1 : 5.07 1790.84 7.00 0.00 0.00 70630.98 7410.61 72190.42 00:13:50.062 [2024-11-03T10:08:18.424Z] =================================================================================================================== 00:13:50.062 [2024-11-03T10:08:18.424Z] Total : 23096.23 90.22 0.00 0.00 66045.37 4965.61 80256.39 00:13:50.323 00:13:50.323 real 0m6.024s 00:13:50.323 user 0m9.582s 00:13:50.323 sys 0m1.514s 00:13:50.323 10:08:18 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.324 ************************************ 00:13:50.324 END TEST bdev_verify 00:13:50.324 10:08:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:50.324 ************************************ 00:13:50.324 10:08:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.324 10:08:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:50.324 10:08:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.324 10:08:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.324 ************************************ 00:13:50.324 START TEST bdev_verify_big_io 00:13:50.324 ************************************ 00:13:50.324 10:08:18 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.324 [2024-11-03 10:08:18.627854] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:50.324 [2024-11-03 10:08:18.627993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81992 ] 00:13:50.586 [2024-11-03 10:08:18.765786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.586 [2024-11-03 10:08:18.837124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.586 [2024-11-03 10:08:18.837213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.848 Running I/O for 5 seconds... 
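The big-I/O variant starting here reuses the same bdevperf harness with only the I/O size changed:

./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # 64 KiB instead of 4 KiB

The two result tables are consistent with that one change: the 4 KiB pass totals about 23.1k IOPS x 4 KiB ≈ 90 MiB/s, while the 64 KiB pass below totals about 1.8k IOPS x 64 KiB ≈ 112 MiB/s, so throughput rises even as IOPS drops by an order of magnitude.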
00:13:56.951 1680.00 IOPS, 105.00 MiB/s [2024-11-03T10:08:25.573Z] 3440.00 IOPS, 215.00 MiB/s 00:13:57.211 Latency(us) 00:13:57.211 [2024-11-03T10:08:25.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.211 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x0 length 0xa000 00:13:57.211 nvme0n1 : 5.67 157.95 9.87 0.00 0.00 780814.63 138734.67 851766.35 00:13:57.211 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0xa000 length 0xa000 00:13:57.211 nvme0n1 : 5.83 98.86 6.18 0.00 0.00 1248980.33 101227.91 1548666.09 00:13:57.211 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x0 length 0xbd0b 00:13:57.211 nvme1n1 : 5.68 214.11 13.38 0.00 0.00 572138.94 16636.06 771106.66 00:13:57.211 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:57.211 nvme1n1 : 5.83 126.12 7.88 0.00 0.00 927749.02 42749.64 1200216.22 00:13:57.211 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x0 length 0x8000 00:13:57.211 nvme2n1 : 5.68 170.35 10.65 0.00 0.00 698733.71 6049.48 729163.62 00:13:57.211 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x8000 length 0x8000 00:13:57.211 nvme2n1 : 5.90 75.97 4.75 0.00 0.00 1462740.11 103244.41 1703532.70 00:13:57.211 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x0 length 0x8000 00:13:57.211 nvme2n2 : 5.69 143.40 8.96 0.00 0.00 810535.51 78239.90 706578.90 00:13:57.211 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x8000 length 0x8000 00:13:57.211 nvme2n2 : 5.99 128.17 8.01 0.00 0.00 839341.36 6906.49 1058255.16 00:13:57.211 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x0 length 0x8000 00:13:57.211 nvme2n3 : 5.69 191.29 11.96 0.00 0.00 596531.48 9578.34 545259.52 00:13:57.211 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x8000 length 0x8000 00:13:57.211 nvme2n3 : 6.09 133.89 8.37 0.00 0.00 776116.69 10233.70 3484498.71 00:13:57.211 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x0 length 0x2000 00:13:57.211 nvme3n1 : 5.69 162.98 10.19 0.00 0.00 682860.02 7813.91 1858399.31 00:13:57.211 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.211 Verification LBA range: start 0x2000 length 0x2000 00:13:57.211 nvme3n1 : 6.24 194.91 12.18 0.00 0.00 514541.76 699.47 2271376.94 00:13:57.211 [2024-11-03T10:08:25.573Z] =================================================================================================================== 00:13:57.211 [2024-11-03T10:08:25.573Z] Total : 1798.00 112.37 0.00 0.00 761703.67 699.47 3484498.71 00:13:57.471 00:13:57.471 real 0m7.058s 00:13:57.471 user 0m12.931s 00:13:57.471 sys 0m0.496s 00:13:57.471 10:08:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.471 10:08:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:13:57.471 ************************************ 00:13:57.471 END TEST bdev_verify_big_io 00:13:57.471 ************************************ 00:13:57.471 10:08:25 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.471 10:08:25 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:57.471 10:08:25 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.471 10:08:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.471 ************************************ 00:13:57.471 START TEST bdev_write_zeroes 00:13:57.471 ************************************ 00:13:57.471 10:08:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.472 [2024-11-03 10:08:25.734942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:57.472 [2024-11-03 10:08:25.735042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82100 ] 00:13:57.731 [2024-11-03 10:08:25.860212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.731 [2024-11-03 10:08:25.904561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.991 Running I/O for 1 seconds... 00:13:58.936 98912.00 IOPS, 386.38 MiB/s 00:13:58.936 Latency(us) 00:13:58.936 [2024-11-03T10:08:27.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.936 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:58.936 nvme0n1 : 1.02 15884.83 62.05 0.00 0.00 8050.06 5116.85 21173.17 00:13:58.936 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:58.936 nvme1n1 : 1.02 18765.96 73.30 0.00 0.00 6808.80 3755.72 14417.92 00:13:58.936 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:58.936 nvme2n1 : 1.02 15864.26 61.97 0.00 0.00 8049.60 5343.70 18148.43 00:13:58.936 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:58.936 nvme2n2 : 1.02 15846.27 61.90 0.00 0.00 8013.63 4814.38 18350.08 00:13:58.936 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:58.936 nvme2n3 : 1.02 15828.09 61.83 0.00 0.00 8018.05 4965.61 19156.68 00:13:58.936 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:58.936 nvme3n1 : 1.02 15810.21 61.76 0.00 0.00 8022.70 4789.17 18955.03 00:13:58.936 [2024-11-03T10:08:27.298Z] =================================================================================================================== 00:13:58.936 [2024-11-03T10:08:27.298Z] Total : 97999.62 382.81 0.00 0.00 7796.12 3755.72 21173.17 00:13:59.198 00:13:59.198 real 0m1.678s 00:13:59.198 user 0m1.099s 00:13:59.198 sys 0m0.415s 00:13:59.198 10:08:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.198 ************************************ 00:13:59.198 END TEST bdev_write_zeroes 00:13:59.198 10:08:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 
-- # set +x 00:13:59.198 ************************************ 00:13:59.198 10:08:27 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.198 10:08:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:59.198 10:08:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.198 10:08:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.198 ************************************ 00:13:59.198 START TEST bdev_json_nonenclosed 00:13:59.198 ************************************ 00:13:59.198 10:08:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.198 [2024-11-03 10:08:27.493492] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:59.198 [2024-11-03 10:08:27.493635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82142 ] 00:13:59.459 [2024-11-03 10:08:27.630143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.459 [2024-11-03 10:08:27.699838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.459 [2024-11-03 10:08:27.699988] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:59.459 [2024-11-03 10:08:27.700007] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:59.459 [2024-11-03 10:08:27.700022] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:59.721 00:13:59.721 real 0m0.406s 00:13:59.721 user 0m0.194s 00:13:59.721 sys 0m0.105s 00:13:59.721 10:08:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.721 ************************************ 00:13:59.721 END TEST bdev_json_nonenclosed 00:13:59.721 ************************************ 00:13:59.721 10:08:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:59.721 10:08:27 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.721 10:08:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:59.721 10:08:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.721 10:08:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.721 ************************************ 00:13:59.721 START TEST bdev_json_nonarray 00:13:59.721 ************************************ 00:13:59.721 10:08:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:59.721 [2024-11-03 10:08:27.963863] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
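Both JSON negative tests in this stretch (bdev_json_nonenclosed above, bdev_json_nonarray starting here) hand bdevperf a deliberately malformed config and assert that the app stops with a non-zero status. The actual contents of nonenclosed.json and nonarray.json are not reproduced in this log, but the logged errors pin down what each violates; hypothetical minimal offenders would be a bare body like
  "subsystems": []
for nonenclosed.json (tripping the "not enclosed in {}" check above) and
  { "subsystems": {} }
for nonarray.json (enclosed, but failing the "'subsystems' should be an array" check just below).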
00:13:59.721 [2024-11-03 10:08:27.964003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82163 ] 00:13:59.983 [2024-11-03 10:08:28.101635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.983 [2024-11-03 10:08:28.171150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.983 [2024-11-03 10:08:28.171325] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:59.983 [2024-11-03 10:08:28.171348] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:59.983 [2024-11-03 10:08:28.171363] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:59.983 00:13:59.983 real 0m0.400s 00:13:59.983 user 0m0.183s 00:13:59.983 sys 0m0.113s 00:13:59.983 10:08:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.983 ************************************ 00:13:59.983 END TEST bdev_json_nonarray 00:13:59.983 ************************************ 00:13:59.983 10:08:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:00.244 10:08:28 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:00.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:03.815 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:03.815 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.389 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.389 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.389 00:14:04.389 real 0m51.358s 00:14:04.389 user 1m19.525s 00:14:04.389 sys 0m34.431s 00:14:04.389 ************************************ 00:14:04.389 END TEST blockdev_xnvme 00:14:04.389 ************************************ 00:14:04.389 10:08:32 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.389 10:08:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.389 10:08:32 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:04.389 10:08:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:04.389 10:08:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.389 10:08:32 -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.389 ************************************ 00:14:04.389 START TEST ublk 00:14:04.389 ************************************ 00:14:04.389 10:08:32 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:04.651 * Looking for test storage... 00:14:04.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.651 10:08:32 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.651 10:08:32 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.651 10:08:32 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.651 10:08:32 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.651 10:08:32 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.651 10:08:32 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.651 10:08:32 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.651 10:08:32 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:04.651 10:08:32 ublk -- scripts/common.sh@345 -- # : 1 00:14:04.651 10:08:32 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.651 10:08:32 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.651 10:08:32 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:04.651 10:08:32 ublk -- scripts/common.sh@353 -- # local d=1 00:14:04.651 10:08:32 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.651 10:08:32 ublk -- scripts/common.sh@355 -- # echo 1 00:14:04.651 10:08:32 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.651 10:08:32 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@353 -- # local d=2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.651 10:08:32 ublk -- scripts/common.sh@355 -- # echo 2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.651 10:08:32 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.651 10:08:32 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.651 10:08:32 ublk -- scripts/common.sh@368 -- # return 0 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.651 --rc genhtml_branch_coverage=1 00:14:04.651 --rc genhtml_function_coverage=1 00:14:04.651 --rc genhtml_legend=1 00:14:04.651 --rc geninfo_all_blocks=1 00:14:04.651 --rc geninfo_unexecuted_blocks=1 00:14:04.651 00:14:04.651 ' 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.651 --rc genhtml_branch_coverage=1 00:14:04.651 --rc genhtml_function_coverage=1 00:14:04.651 --rc genhtml_legend=1 00:14:04.651 --rc geninfo_all_blocks=1 00:14:04.651 --rc geninfo_unexecuted_blocks=1 00:14:04.651 00:14:04.651 ' 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.651 --rc genhtml_branch_coverage=1 00:14:04.651 --rc genhtml_function_coverage=1 00:14:04.651 --rc genhtml_legend=1 00:14:04.651 --rc geninfo_all_blocks=1 00:14:04.651 --rc geninfo_unexecuted_blocks=1 00:14:04.651 00:14:04.651 ' 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.651 --rc genhtml_branch_coverage=1 00:14:04.651 --rc genhtml_function_coverage=1 00:14:04.651 --rc genhtml_legend=1 00:14:04.651 --rc geninfo_all_blocks=1 00:14:04.651 --rc geninfo_unexecuted_blocks=1 00:14:04.651 00:14:04.651 ' 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:04.651 10:08:32 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:04.651 10:08:32 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:04.651 10:08:32 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:04.651 10:08:32 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:04.651 10:08:32 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:04.651 10:08:32 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:04.651 10:08:32 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:04.651 10:08:32 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:04.651 10:08:32 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:04.651 10:08:32 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.651 10:08:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.651 ************************************ 00:14:04.651 START TEST test_save_ublk_config 00:14:04.651 ************************************ 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:04.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82453 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82453 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82453 ']' 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.651 10:08:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:04.651 [2024-11-03 10:08:32.964563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:04.651 [2024-11-03 10:08:32.964709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82453 ] 00:14:04.913 [2024-11-03 10:08:33.103211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.913 [2024-11-03 10:08:33.185069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.486 10:08:33 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.486 10:08:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:05.486 10:08:33 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:05.486 10:08:33 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:05.486 10:08:33 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.486 10:08:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:05.486 [2024-11-03 10:08:33.824261] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:05.486 [2024-11-03 10:08:33.824688] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:05.748 malloc0 00:14:05.748 [2024-11-03 10:08:33.864378] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:05.748 [2024-11-03 10:08:33.864478] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:05.748 [2024-11-03 10:08:33.864487] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:05.748 [2024-11-03 10:08:33.864503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.749 [2024-11-03 10:08:33.872479] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.749 [2024-11-03 10:08:33.872510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.749 [2024-11-03 10:08:33.880256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.749 [2024-11-03 10:08:33.880398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:05.749 [2024-11-03 10:08:33.897257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.749 0 00:14:05.749 10:08:33 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.749 10:08:33 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:05.749 10:08:33 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.749 10:08:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:06.011 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.011 10:08:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:06.011 "subsystems": [ 00:14:06.011 { 00:14:06.011 "subsystem": "fsdev", 00:14:06.011 "config": [ 00:14:06.011 { 00:14:06.011 "method": "fsdev_set_opts", 00:14:06.011 "params": { 00:14:06.011 "fsdev_io_pool_size": 65535, 00:14:06.011 "fsdev_io_cache_size": 256 00:14:06.011 } 00:14:06.011 } 00:14:06.011 ] 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "subsystem": "keyring", 00:14:06.011 "config": [] 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "subsystem": "iobuf", 00:14:06.011 "config": [ 00:14:06.011 { 
00:14:06.011 "method": "iobuf_set_options", 00:14:06.011 "params": { 00:14:06.011 "small_pool_count": 8192, 00:14:06.011 "large_pool_count": 1024, 00:14:06.011 "small_bufsize": 8192, 00:14:06.011 "large_bufsize": 135168 00:14:06.011 } 00:14:06.011 } 00:14:06.011 ] 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "subsystem": "sock", 00:14:06.011 "config": [ 00:14:06.011 { 00:14:06.011 "method": "sock_set_default_impl", 00:14:06.011 "params": { 00:14:06.011 "impl_name": "posix" 00:14:06.011 } 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "method": "sock_impl_set_options", 00:14:06.011 "params": { 00:14:06.011 "impl_name": "ssl", 00:14:06.011 "recv_buf_size": 4096, 00:14:06.011 "send_buf_size": 4096, 00:14:06.011 "enable_recv_pipe": true, 00:14:06.011 "enable_quickack": false, 00:14:06.011 "enable_placement_id": 0, 00:14:06.011 "enable_zerocopy_send_server": true, 00:14:06.011 "enable_zerocopy_send_client": false, 00:14:06.011 "zerocopy_threshold": 0, 00:14:06.011 "tls_version": 0, 00:14:06.011 "enable_ktls": false 00:14:06.011 } 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "method": "sock_impl_set_options", 00:14:06.011 "params": { 00:14:06.011 "impl_name": "posix", 00:14:06.011 "recv_buf_size": 2097152, 00:14:06.011 "send_buf_size": 2097152, 00:14:06.011 "enable_recv_pipe": true, 00:14:06.011 "enable_quickack": false, 00:14:06.011 "enable_placement_id": 0, 00:14:06.011 "enable_zerocopy_send_server": true, 00:14:06.011 "enable_zerocopy_send_client": false, 00:14:06.011 "zerocopy_threshold": 0, 00:14:06.011 "tls_version": 0, 00:14:06.011 "enable_ktls": false 00:14:06.011 } 00:14:06.011 } 00:14:06.011 ] 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "subsystem": "vmd", 00:14:06.011 "config": [] 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "subsystem": "accel", 00:14:06.011 "config": [ 00:14:06.011 { 00:14:06.011 "method": "accel_set_options", 00:14:06.011 "params": { 00:14:06.011 "small_cache_size": 128, 00:14:06.011 "large_cache_size": 16, 00:14:06.011 "task_count": 2048, 00:14:06.011 "sequence_count": 2048, 00:14:06.011 "buf_count": 2048 00:14:06.011 } 00:14:06.011 } 00:14:06.011 ] 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "subsystem": "bdev", 00:14:06.011 "config": [ 00:14:06.011 { 00:14:06.011 "method": "bdev_set_options", 00:14:06.011 "params": { 00:14:06.011 "bdev_io_pool_size": 65535, 00:14:06.011 "bdev_io_cache_size": 256, 00:14:06.011 "bdev_auto_examine": true, 00:14:06.011 "iobuf_small_cache_size": 128, 00:14:06.011 "iobuf_large_cache_size": 16 00:14:06.011 } 00:14:06.011 }, 00:14:06.011 { 00:14:06.011 "method": "bdev_raid_set_options", 00:14:06.011 "params": { 00:14:06.011 "process_window_size_kb": 1024, 00:14:06.011 "process_max_bandwidth_mb_sec": 0 00:14:06.011 } 00:14:06.011 }, 00:14:06.011 { 00:14:06.012 "method": "bdev_iscsi_set_options", 00:14:06.012 "params": { 00:14:06.012 "timeout_sec": 30 00:14:06.012 } 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "method": "bdev_nvme_set_options", 00:14:06.012 "params": { 00:14:06.012 "action_on_timeout": "none", 00:14:06.012 "timeout_us": 0, 00:14:06.012 "timeout_admin_us": 0, 00:14:06.012 "keep_alive_timeout_ms": 10000, 00:14:06.012 "arbitration_burst": 0, 00:14:06.012 "low_priority_weight": 0, 00:14:06.012 "medium_priority_weight": 0, 00:14:06.012 "high_priority_weight": 0, 00:14:06.012 "nvme_adminq_poll_period_us": 10000, 00:14:06.012 "nvme_ioq_poll_period_us": 0, 00:14:06.012 "io_queue_requests": 0, 00:14:06.012 "delay_cmd_submit": true, 00:14:06.012 "transport_retry_count": 4, 00:14:06.012 "bdev_retry_count": 3, 00:14:06.012 
"transport_ack_timeout": 0, 00:14:06.012 "ctrlr_loss_timeout_sec": 0, 00:14:06.012 "reconnect_delay_sec": 0, 00:14:06.012 "fast_io_fail_timeout_sec": 0, 00:14:06.012 "disable_auto_failback": false, 00:14:06.012 "generate_uuids": false, 00:14:06.012 "transport_tos": 0, 00:14:06.012 "nvme_error_stat": false, 00:14:06.012 "rdma_srq_size": 0, 00:14:06.012 "io_path_stat": false, 00:14:06.012 "allow_accel_sequence": false, 00:14:06.012 "rdma_max_cq_size": 0, 00:14:06.012 "rdma_cm_event_timeout_ms": 0, 00:14:06.012 "dhchap_digests": [ 00:14:06.012 "sha256", 00:14:06.012 "sha384", 00:14:06.012 "sha512" 00:14:06.012 ], 00:14:06.012 "dhchap_dhgroups": [ 00:14:06.012 "null", 00:14:06.012 "ffdhe2048", 00:14:06.012 "ffdhe3072", 00:14:06.012 "ffdhe4096", 00:14:06.012 "ffdhe6144", 00:14:06.012 "ffdhe8192" 00:14:06.012 ] 00:14:06.012 } 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "method": "bdev_nvme_set_hotplug", 00:14:06.012 "params": { 00:14:06.012 "period_us": 100000, 00:14:06.012 "enable": false 00:14:06.012 } 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "method": "bdev_malloc_create", 00:14:06.012 "params": { 00:14:06.012 "name": "malloc0", 00:14:06.012 "num_blocks": 8192, 00:14:06.012 "block_size": 4096, 00:14:06.012 "physical_block_size": 4096, 00:14:06.012 "uuid": "714bba07-91cd-4952-91c1-7fef0e2c5e08", 00:14:06.012 "optimal_io_boundary": 0, 00:14:06.012 "md_size": 0, 00:14:06.012 "dif_type": 0, 00:14:06.012 "dif_is_head_of_md": false, 00:14:06.012 "dif_pi_format": 0 00:14:06.012 } 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "method": "bdev_wait_for_examine" 00:14:06.012 } 00:14:06.012 ] 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "scsi", 00:14:06.012 "config": null 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "scheduler", 00:14:06.012 "config": [ 00:14:06.012 { 00:14:06.012 "method": "framework_set_scheduler", 00:14:06.012 "params": { 00:14:06.012 "name": "static" 00:14:06.012 } 00:14:06.012 } 00:14:06.012 ] 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "vhost_scsi", 00:14:06.012 "config": [] 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "vhost_blk", 00:14:06.012 "config": [] 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "ublk", 00:14:06.012 "config": [ 00:14:06.012 { 00:14:06.012 "method": "ublk_create_target", 00:14:06.012 "params": { 00:14:06.012 "cpumask": "1" 00:14:06.012 } 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "method": "ublk_start_disk", 00:14:06.012 "params": { 00:14:06.012 "bdev_name": "malloc0", 00:14:06.012 "ublk_id": 0, 00:14:06.012 "num_queues": 1, 00:14:06.012 "queue_depth": 128 00:14:06.012 } 00:14:06.012 } 00:14:06.012 ] 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "nbd", 00:14:06.012 "config": [] 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "nvmf", 00:14:06.012 "config": [ 00:14:06.012 { 00:14:06.012 "method": "nvmf_set_config", 00:14:06.012 "params": { 00:14:06.012 "discovery_filter": "match_any", 00:14:06.012 "admin_cmd_passthru": { 00:14:06.012 "identify_ctrlr": false 00:14:06.012 }, 00:14:06.012 "dhchap_digests": [ 00:14:06.012 "sha256", 00:14:06.012 "sha384", 00:14:06.012 "sha512" 00:14:06.012 ], 00:14:06.012 "dhchap_dhgroups": [ 00:14:06.012 "null", 00:14:06.012 "ffdhe2048", 00:14:06.012 "ffdhe3072", 00:14:06.012 "ffdhe4096", 00:14:06.012 "ffdhe6144", 00:14:06.012 "ffdhe8192" 00:14:06.012 ] 00:14:06.012 } 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "method": "nvmf_set_max_subsystems", 00:14:06.012 "params": { 00:14:06.012 "max_subsystems": 1024 00:14:06.012 } 00:14:06.012 }, 00:14:06.012 
{ 00:14:06.012 "method": "nvmf_set_crdt", 00:14:06.012 "params": { 00:14:06.012 "crdt1": 0, 00:14:06.012 "crdt2": 0, 00:14:06.012 "crdt3": 0 00:14:06.012 } 00:14:06.012 } 00:14:06.012 ] 00:14:06.012 }, 00:14:06.012 { 00:14:06.012 "subsystem": "iscsi", 00:14:06.012 "config": [ 00:14:06.012 { 00:14:06.012 "method": "iscsi_set_options", 00:14:06.012 "params": { 00:14:06.012 "node_base": "iqn.2016-06.io.spdk", 00:14:06.012 "max_sessions": 128, 00:14:06.012 "max_connections_per_session": 2, 00:14:06.012 "max_queue_depth": 64, 00:14:06.012 "default_time2wait": 2, 00:14:06.012 "default_time2retain": 20, 00:14:06.012 "first_burst_length": 8192, 00:14:06.012 "immediate_data": true, 00:14:06.012 "allow_duplicated_isid": false, 00:14:06.012 "error_recovery_level": 0, 00:14:06.012 "nop_timeout": 60, 00:14:06.012 "nop_in_interval": 30, 00:14:06.012 "disable_chap": false, 00:14:06.012 "require_chap": false, 00:14:06.012 "mutual_chap": false, 00:14:06.012 "chap_group": 0, 00:14:06.012 "max_large_datain_per_connection": 64, 00:14:06.012 "max_r2t_per_connection": 4, 00:14:06.012 "pdu_pool_size": 36864, 00:14:06.012 "immediate_data_pool_size": 16384, 00:14:06.012 "data_out_pool_size": 2048 00:14:06.012 } 00:14:06.012 } 00:14:06.012 ] 00:14:06.012 } 00:14:06.012 ] 00:14:06.012 }' 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82453 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82453 ']' 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82453 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82453 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82453' 00:14:06.012 killing process with pid 82453 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82453 00:14:06.012 10:08:34 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82453 00:14:06.274 [2024-11-03 10:08:34.611154] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.536 [2024-11-03 10:08:34.648384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.536 [2024-11-03 10:08:34.648542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.536 [2024-11-03 10:08:34.656264] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.536 [2024-11-03 10:08:34.656336] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:06.536 [2024-11-03 10:08:34.656346] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:06.536 [2024-11-03 10:08:34.656387] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:06.536 [2024-11-03 10:08:34.656544] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82491 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82491 
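The second target below is booted from exactly the JSON captured above: ublk.sh feeds the saved config back to spdk_tgt through process substitution as -c /dev/fd/63. A minimal by-hand sketch of the same round trip, assuming scripts/rpc.py and build/bin/spdk_tgt from the SPDK repo (the /tmp path is illustrative, not taken from this run):

    # Dump the live configuration of a running target over the default
    # /var/tmp/spdk.sock RPC socket, then boot a fresh target from that JSON.
    scripts/rpc.py save_config > /tmp/ublk_config.json    # illustrative path
    build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json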
00:14:07.220 10:08:35 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82491 ']' 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:07.220 10:08:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:07.220 "subsystems": [ 00:14:07.220 { 00:14:07.220 "subsystem": "fsdev", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "fsdev_set_opts", 00:14:07.220 "params": { 00:14:07.220 "fsdev_io_pool_size": 65535, 00:14:07.220 "fsdev_io_cache_size": 256 00:14:07.220 } 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "keyring", 00:14:07.220 "config": [] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "iobuf", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "iobuf_set_options", 00:14:07.220 "params": { 00:14:07.220 "small_pool_count": 8192, 00:14:07.220 "large_pool_count": 1024, 00:14:07.220 "small_bufsize": 8192, 00:14:07.220 "large_bufsize": 135168 00:14:07.220 } 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "sock", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "sock_set_default_impl", 00:14:07.220 "params": { 00:14:07.220 "impl_name": "posix" 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "sock_impl_set_options", 00:14:07.220 "params": { 00:14:07.220 "impl_name": "ssl", 00:14:07.220 "recv_buf_size": 4096, 00:14:07.220 "send_buf_size": 4096, 00:14:07.220 "enable_recv_pipe": true, 00:14:07.220 "enable_quickack": false, 00:14:07.220 "enable_placement_id": 0, 00:14:07.220 "enable_zerocopy_send_server": true, 00:14:07.220 "enable_zerocopy_send_client": false, 00:14:07.220 "zerocopy_threshold": 0, 00:14:07.220 "tls_version": 0, 00:14:07.220 "enable_ktls": false 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "sock_impl_set_options", 00:14:07.220 "params": { 00:14:07.220 "impl_name": "posix", 00:14:07.220 "recv_buf_size": 2097152, 00:14:07.220 "send_buf_size": 2097152, 00:14:07.220 "enable_recv_pipe": true, 00:14:07.220 "enable_quickack": false, 00:14:07.220 "enable_placement_id": 0, 00:14:07.220 "enable_zerocopy_send_server": true, 00:14:07.220 "enable_zerocopy_send_client": false, 00:14:07.220 "zerocopy_threshold": 0, 00:14:07.220 "tls_version": 0, 00:14:07.220 "enable_ktls": false 00:14:07.220 } 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "vmd", 00:14:07.220 "config": [] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "accel", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "accel_set_options", 00:14:07.220 "params": { 00:14:07.220 "small_cache_size": 128, 00:14:07.220 "large_cache_size": 16, 00:14:07.220 "task_count": 2048, 00:14:07.220 
"sequence_count": 2048, 00:14:07.220 "buf_count": 2048 00:14:07.220 } 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "bdev", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "bdev_set_options", 00:14:07.220 "params": { 00:14:07.220 "bdev_io_pool_size": 65535, 00:14:07.220 "bdev_io_cache_size": 256, 00:14:07.220 "bdev_auto_examine": true, 00:14:07.220 "iobuf_small_cache_size": 128, 00:14:07.220 "iobuf_large_cache_size": 16 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "bdev_raid_set_options", 00:14:07.220 "params": { 00:14:07.220 "process_window_size_kb": 1024, 00:14:07.220 "process_max_bandwidth_mb_sec": 0 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "bdev_iscsi_set_options", 00:14:07.220 "params": { 00:14:07.220 "timeout_sec": 30 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "bdev_nvme_set_options", 00:14:07.220 "params": { 00:14:07.220 "action_on_timeout": "none", 00:14:07.220 "timeout_us": 0, 00:14:07.220 "timeout_admin_us": 0, 00:14:07.220 "keep_alive_timeout_ms": 10000, 00:14:07.220 "arbitration_burst": 0, 00:14:07.220 "low_priority_weight": 0, 00:14:07.220 "medium_priority_weight": 0, 00:14:07.220 "high_priority_weight": 0, 00:14:07.220 "nvme_adminq_poll_period_us": 10000, 00:14:07.220 "nvme_ioq_poll_period_us": 0, 00:14:07.220 "io_queue_requests": 0, 00:14:07.220 "delay_cmd_submit": true, 00:14:07.220 "transport_retry_count": 4, 00:14:07.220 "bdev_retry_count": 3, 00:14:07.220 "transport_ack_timeout": 0, 00:14:07.220 "ctrlr_loss_timeout_sec": 0, 00:14:07.220 "reconnect_delay_sec": 0, 00:14:07.220 "fast_io_fail_timeout_sec": 0, 00:14:07.220 "disable_auto_failback": false, 00:14:07.220 "generate_uuids": false, 00:14:07.220 "transport_tos": 0, 00:14:07.220 "nvme_error_stat": false, 00:14:07.220 "rdma_srq_size": 0, 00:14:07.220 "io_path_stat": false, 00:14:07.220 "allow_accel_sequence": false, 00:14:07.220 "rdma_max_cq_size": 0, 00:14:07.220 "rdma_cm_event_timeout_ms": 0, 00:14:07.220 "dhchap_digests": [ 00:14:07.220 "sha256", 00:14:07.220 "sha384", 00:14:07.220 "sha512" 00:14:07.220 ], 00:14:07.220 "dhchap_dhgroups": [ 00:14:07.220 "null", 00:14:07.220 "ffdhe2048", 00:14:07.220 "ffdhe3072", 00:14:07.220 "ffdhe4096", 00:14:07.220 "ffdhe6144", 00:14:07.220 "ffdhe8192" 00:14:07.220 ] 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "bdev_nvme_set_hotplug", 00:14:07.220 "params": { 00:14:07.220 "period_us": 100000, 00:14:07.220 "enable": false 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "bdev_malloc_create", 00:14:07.220 "params": { 00:14:07.220 "name": "malloc0", 00:14:07.220 "num_blocks": 8192, 00:14:07.220 "block_size": 4096, 00:14:07.220 "physical_block_size": 4096, 00:14:07.220 "uuid": "714bba07-91cd-4952-91c1-7fef0e2c5e08", 00:14:07.220 "optimal_io_boundary": 0, 00:14:07.220 "md_size": 0, 00:14:07.220 "dif_type": 0, 00:14:07.220 "dif_is_head_of_md": false, 00:14:07.220 "dif_pi_format": 0 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "bdev_wait_for_examine" 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "scsi", 00:14:07.220 "config": null 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "scheduler", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "framework_set_scheduler", 00:14:07.220 "params": { 00:14:07.220 "name": "static" 00:14:07.220 } 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": 
"vhost_scsi", 00:14:07.220 "config": [] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "vhost_blk", 00:14:07.220 "config": [] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "ublk", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "ublk_create_target", 00:14:07.220 "params": { 00:14:07.220 "cpumask": "1" 00:14:07.220 } 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "method": "ublk_start_disk", 00:14:07.220 "params": { 00:14:07.220 "bdev_name": "malloc0", 00:14:07.220 "ublk_id": 0, 00:14:07.220 "num_queues": 1, 00:14:07.220 "queue_depth": 128 00:14:07.220 } 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "nbd", 00:14:07.220 "config": [] 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "subsystem": "nvmf", 00:14:07.220 "config": [ 00:14:07.220 { 00:14:07.220 "method": "nvmf_set_config", 00:14:07.220 "params": { 00:14:07.220 "discovery_filter": "match_any", 00:14:07.220 "admin_cmd_passthru": { 00:14:07.220 "identify_ctrlr": false 00:14:07.220 }, 00:14:07.220 "dhchap_digests": [ 00:14:07.220 "sha256", 00:14:07.221 "sha384", 00:14:07.221 "sha512" 00:14:07.221 ], 00:14:07.221 "dhchap_dhgroups": [ 00:14:07.221 "null", 00:14:07.221 "ffdhe2048", 00:14:07.221 "ffdhe3072", 00:14:07.221 "ffdhe4096", 00:14:07.221 "ffdhe6144", 00:14:07.221 "ffdhe8192" 00:14:07.221 ] 00:14:07.221 } 00:14:07.221 }, 00:14:07.221 { 00:14:07.221 "method": "nvmf_set_max_subsystems", 00:14:07.221 "params": { 00:14:07.221 "max_subsystems": 1024 00:14:07.221 } 00:14:07.221 }, 00:14:07.221 { 00:14:07.221 "method": "nvmf_set_crdt", 00:14:07.221 "params": { 00:14:07.221 "crdt1": 0, 00:14:07.221 "crdt2": 0, 00:14:07.221 "crdt3": 0 00:14:07.221 } 00:14:07.221 } 00:14:07.221 ] 00:14:07.221 }, 00:14:07.221 { 00:14:07.221 "subsystem": "iscsi", 00:14:07.221 "config": [ 00:14:07.221 { 00:14:07.221 "method": "iscsi_set_options", 00:14:07.221 "params": { 00:14:07.221 "node_base": "iqn.2016-06.io.spdk", 00:14:07.221 "max_sessions": 128, 00:14:07.221 "max_connections_per_session": 2, 00:14:07.221 "max_queue_depth": 64, 00:14:07.221 "default_time2wait": 2, 00:14:07.221 "default_time2retain": 20, 00:14:07.221 "first_burst_length": 8192, 00:14:07.221 "immediate_data": true, 00:14:07.221 "allow_duplicated_isid": false, 00:14:07.221 "error_recovery_level": 0, 00:14:07.221 "nop_timeout": 60, 00:14:07.221 "nop_in_interval": 30, 00:14:07.221 "disable_chap": false, 00:14:07.221 "require_chap": false, 00:14:07.221 "mutual_chap": false, 00:14:07.221 "chap_group": 0, 00:14:07.221 "max_large_datain_per_connection": 64, 00:14:07.221 "max_r2t_per_connection": 4, 00:14:07.221 "pdu_pool_size": 36864, 00:14:07.221 "immediate_data_pool_size": 16384, 00:14:07.221 "data_out_pool_size": 2048 00:14:07.221 } 00:14:07.221 } 00:14:07.221 ] 00:14:07.221 } 00:14:07.221 ] 00:14:07.221 }' 00:14:07.221 [2024-11-03 10:08:35.410325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:07.221 [2024-11-03 10:08:35.410841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82491 ] 00:14:07.221 [2024-11-03 10:08:35.550063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.482 [2024-11-03 10:08:35.630639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.743 [2024-11-03 10:08:36.092255] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:07.743 [2024-11-03 10:08:36.092684] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:07.743 [2024-11-03 10:08:36.100400] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:07.743 [2024-11-03 10:08:36.100494] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:07.743 [2024-11-03 10:08:36.100503] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:07.743 [2024-11-03 10:08:36.100518] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.015 [2024-11-03 10:08:36.108463] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.015 [2024-11-03 10:08:36.108494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.015 [2024-11-03 10:08:36.116270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.015 [2024-11-03 10:08:36.116399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:08.015 [2024-11-03 10:08:36.133259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82491 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82491 ']' 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82491 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82491 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.015 killing process with pid 82491 00:14:08.015 
10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82491' 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82491 00:14:08.015 10:08:36 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82491 00:14:08.589 [2024-11-03 10:08:36.726763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.589 [2024-11-03 10:08:36.770269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.589 [2024-11-03 10:08:36.770442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.589 [2024-11-03 10:08:36.780348] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.589 [2024-11-03 10:08:36.780423] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:08.589 [2024-11-03 10:08:36.780447] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:08.589 [2024-11-03 10:08:36.780495] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.589 [2024-11-03 10:08:36.780657] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:09.160 10:08:37 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:09.160 00:14:09.160 real 0m4.577s 00:14:09.160 user 0m2.874s 00:14:09.160 sys 0m2.363s 00:14:09.160 ************************************ 00:14:09.160 END TEST test_save_ublk_config 00:14:09.160 ************************************ 00:14:09.160 10:08:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.160 10:08:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:09.160 10:08:37 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82551 00:14:09.160 10:08:37 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.160 10:08:37 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82551 00:14:09.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.160 10:08:37 ublk -- common/autotest_common.sh@831 -- # '[' -z 82551 ']' 00:14:09.160 10:08:37 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.160 10:08:37 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.160 10:08:37 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.160 10:08:37 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.160 10:08:37 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.160 10:08:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.421 [2024-11-03 10:08:37.597543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:09.421 [2024-11-03 10:08:37.597701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82551 ] 00:14:09.421 [2024-11-03 10:08:37.733943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.682 [2024-11-03 10:08:37.804288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.682 [2024-11-03 10:08:37.804360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.255 10:08:38 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.255 10:08:38 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:10.255 10:08:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:10.255 10:08:38 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:10.255 10:08:38 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.255 10:08:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.255 ************************************ 00:14:10.255 START TEST test_create_ublk 00:14:10.255 ************************************ 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:10.255 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.255 [2024-11-03 10:08:38.467255] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:10.255 [2024-11-03 10:08:38.469482] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.255 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:10.255 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.255 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:10.255 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.255 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.255 [2024-11-03 10:08:38.592434] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:10.255 [2024-11-03 10:08:38.592930] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:10.255 [2024-11-03 10:08:38.592951] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:10.255 [2024-11-03 10:08:38.592962] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.255 [2024-11-03 10:08:38.600749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.255 [2024-11-03 10:08:38.600787] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.255 
[2024-11-03 10:08:38.608282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.255 [2024-11-03 10:08:38.609058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:10.517 [2024-11-03 10:08:38.632257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.517 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:10.517 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.517 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.517 10:08:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:10.517 { 00:14:10.517 "ublk_device": "/dev/ublkb0", 00:14:10.517 "id": 0, 00:14:10.517 "queue_depth": 512, 00:14:10.517 "num_queues": 4, 00:14:10.517 "bdev_name": "Malloc0" 00:14:10.517 } 00:14:10.517 ]' 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
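Expanded, the command assembled into fio_template above is the one below; every flag is taken from the xtrace, annotated here for readability:

    # 10 seconds of sequential O_DIRECT writes of pattern 0xcc over the
    # first 128 MiB of /dev/ublkb0, with inline verification requested.
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

As fio itself notes just below, with --time_based the write phase consumes the entire runtime, so the verification read phase never actually runs.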
00:14:10.517 10:08:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:10.778 fio: verification read phase will never start because write phase uses all of runtime 00:14:10.778 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:10.778 fio-3.35 00:14:10.778 Starting 1 process 00:14:20.753 00:14:20.753 fio_test: (groupid=0, jobs=1): err= 0: pid=82596: Sun Nov 3 10:08:49 2024 00:14:20.753 write: IOPS=13.2k, BW=51.4MiB/s (53.9MB/s)(514MiB/10001msec); 0 zone resets 00:14:20.753 clat (usec): min=48, max=12286, avg=75.17, stdev=207.31 00:14:20.753 lat (usec): min=48, max=12286, avg=75.63, stdev=207.37 00:14:20.753 clat percentiles (usec): 00:14:20.753 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:14:20.753 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62], 00:14:20.753 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 76], 00:14:20.753 | 99.00th=[ 161], 99.50th=[ 297], 99.90th=[ 3884], 99.95th=[ 4047], 00:14:20.753 | 99.99th=[ 4228] 00:14:20.753 bw ( KiB/s): min=10232, max=62968, per=99.12%, avg=52172.63, stdev=17082.96, samples=19 00:14:20.753 iops : min= 2558, max=15742, avg=13043.16, stdev=4270.74, samples=19 00:14:20.753 lat (usec) : 50=0.03%, 100=98.28%, 250=0.93%, 500=0.34%, 750=0.01% 00:14:20.753 lat (usec) : 1000=0.02% 00:14:20.753 lat (msec) : 2=0.07%, 4=0.26%, 10=0.06%, 20=0.01% 00:14:20.753 cpu : usr=1.64%, sys=9.51%, ctx=131670, majf=0, minf=798 00:14:20.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.753 issued rwts: total=0,131603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.753 00:14:20.753 Run status group 0 (all jobs): 00:14:20.753 WRITE: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=514MiB (539MB), run=10001-10001msec 00:14:20.753 00:14:20.753 Disk stats (read/write): 00:14:20.753 ublkb0: ios=0/130014, merge=0/0, ticks=0/8549, in_queue=8550, util=98.68% 00:14:20.753 10:08:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:20.753 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.753 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:20.753 [2024-11-03 10:08:49.055253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:20.753 [2024-11-03 10:08:49.103280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:20.753 [2024-11-03 10:08:49.103942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:20.753 [2024-11-03 10:08:49.111249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:20.753 [2024-11-03 10:08:49.111498] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:20.753 [2024-11-03 10:08:49.111509] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:21.011 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.012 10:08:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 [2024-11-03 10:08:49.127314] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:21.012 request: 00:14:21.012 { 00:14:21.012 "ublk_id": 0, 00:14:21.012 "method": "ublk_stop_disk", 00:14:21.012 "req_id": 1 00:14:21.012 } 00:14:21.012 Got JSON-RPC error response 00:14:21.012 response: 00:14:21.012 { 00:14:21.012 "code": -19, 00:14:21.012 "message": "No such device" 00:14:21.012 } 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.012 10:08:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 [2024-11-03 10:08:49.143327] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:21.012 [2024-11-03 10:08:49.144628] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:21.012 [2024-11-03 10:08:49.144662] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.012 10:08:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.012 10:08:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:21.012 10:08:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.012 10:08:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:21.012 10:08:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:21.012 10:08:49 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:21.012 10:08:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.012 10:08:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:21.012 10:08:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:21.012 10:08:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:21.012 00:14:21.012 real 0m10.861s 00:14:21.012 user 0m0.457s 00:14:21.012 sys 0m1.032s 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.012 10:08:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 ************************************ 00:14:21.012 END TEST test_create_ublk 00:14:21.012 ************************************ 00:14:21.012 10:08:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:21.012 10:08:49 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:21.012 10:08:49 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.012 10:08:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 ************************************ 00:14:21.012 START TEST test_create_multi_ublk 00:14:21.012 ************************************ 00:14:21.012 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:21.012 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:21.012 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.012 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.012 [2024-11-03 10:08:49.370238] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:21.012 [2024-11-03 10:08:49.371349] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:21.012 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.012 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.271 [2024-11-03 10:08:49.454368] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:21.271 [2024-11-03 10:08:49.454681] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:21.271 [2024-11-03 10:08:49.454695] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:21.271 [2024-11-03 10:08:49.454701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.271 [2024-11-03 10:08:49.466268] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.271 [2024-11-03 10:08:49.466284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.271 [2024-11-03 10:08:49.478253] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.271 [2024-11-03 10:08:49.478763] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:21.271 [2024-11-03 10:08:49.512260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.271 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.271 [2024-11-03 10:08:49.620349] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:21.271 [2024-11-03 10:08:49.620662] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:21.271 [2024-11-03 10:08:49.620675] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:21.271 [2024-11-03 10:08:49.620681] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.529 [2024-11-03 10:08:49.632484] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.529 [2024-11-03 10:08:49.632499] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.529 [2024-11-03 10:08:49.644254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.529 [2024-11-03 10:08:49.644775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:21.529 [2024-11-03 10:08:49.680251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.529 
10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:21.529 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.530 [2024-11-03 10:08:49.788345] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:21.530 [2024-11-03 10:08:49.788669] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:21.530 [2024-11-03 10:08:49.788683] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:21.530 [2024-11-03 10:08:49.788688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.530 [2024-11-03 10:08:49.800267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.530 [2024-11-03 10:08:49.800282] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.530 [2024-11-03 10:08:49.812248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.530 [2024-11-03 10:08:49.812758] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:21.530 [2024-11-03 10:08:49.848258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.530 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.788 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.788 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:21.788 10:08:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:21.788 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.788 10:08:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.788 [2024-11-03 10:08:49.956351] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:21.788 [2024-11-03 10:08:49.956676] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:21.788 [2024-11-03 10:08:49.956688] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:21.788 [2024-11-03 10:08:49.956695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.788 
[2024-11-03 10:08:49.968270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.788 [2024-11-03 10:08:49.968292] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.788 [2024-11-03 10:08:49.980245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.788 [2024-11-03 10:08:49.980761] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:21.788 [2024-11-03 10:08:50.005257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:21.788 { 00:14:21.788 "ublk_device": "/dev/ublkb0", 00:14:21.788 "id": 0, 00:14:21.788 "queue_depth": 512, 00:14:21.788 "num_queues": 4, 00:14:21.788 "bdev_name": "Malloc0" 00:14:21.788 }, 00:14:21.788 { 00:14:21.788 "ublk_device": "/dev/ublkb1", 00:14:21.788 "id": 1, 00:14:21.788 "queue_depth": 512, 00:14:21.788 "num_queues": 4, 00:14:21.788 "bdev_name": "Malloc1" 00:14:21.788 }, 00:14:21.788 { 00:14:21.788 "ublk_device": "/dev/ublkb2", 00:14:21.788 "id": 2, 00:14:21.788 "queue_depth": 512, 00:14:21.788 "num_queues": 4, 00:14:21.788 "bdev_name": "Malloc2" 00:14:21.788 }, 00:14:21.788 { 00:14:21.788 "ublk_device": "/dev/ublkb3", 00:14:21.788 "id": 3, 00:14:21.788 "queue_depth": 512, 00:14:21.788 "num_queues": 4, 00:14:21.788 "bdev_name": "Malloc3" 00:14:21.788 } 00:14:21.788 ]' 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:21.788 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.047 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.306 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.564 [2024-11-03 10:08:50.700315] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.564 [2024-11-03 10:08:50.740281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.564 [2024-11-03 10:08:50.741086] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.564 [2024-11-03 10:08:50.748255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.564 [2024-11-03 10:08:50.748512] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:22.564 [2024-11-03 10:08:50.748523] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.564 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.564 [2024-11-03 10:08:50.764294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.565 [2024-11-03 10:08:50.795796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.565 [2024-11-03 10:08:50.796827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.565 [2024-11-03 10:08:50.804254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.565 [2024-11-03 10:08:50.804488] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:22.565 [2024-11-03 10:08:50.804498] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.565 [2024-11-03 10:08:50.818328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.565 [2024-11-03 10:08:50.852805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.565 [2024-11-03 10:08:50.853801] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.565 [2024-11-03 10:08:50.860247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.565 [2024-11-03 10:08:50.860471] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:22.565 [2024-11-03 10:08:50.860481] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.565 10:08:50 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.565 [2024-11-03 10:08:50.875329] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.565 [2024-11-03 10:08:50.916275] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.565 [2024-11-03 10:08:50.916900] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.565 [2024-11-03 10:08:50.924255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.565 [2024-11-03 10:08:50.924489] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:22.565 [2024-11-03 10:08:50.924498] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:22.823 10:08:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.823 10:08:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:22.823 [2024-11-03 10:08:51.116301] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:22.823 [2024-11-03 10:08:51.117528] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:22.823 [2024-11-03 10:08:51.117556] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:22.823 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:22.823 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.823 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:22.823 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.823 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:23.082 10:08:51 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:23.082 10:08:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:23.341 00:14:23.341 real 0m2.173s 00:14:23.341 user 0m0.833s 00:14:23.341 sys 0m0.132s 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.341 10:08:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 ************************************ 00:14:23.341 END TEST test_create_multi_ublk 00:14:23.341 ************************************ 00:14:23.341 10:08:51 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:23.341 10:08:51 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:23.341 10:08:51 ublk -- ublk/ublk.sh@130 -- # killprocess 82551 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@950 -- # '[' -z 82551 ']' 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@954 -- # kill -0 82551 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@955 -- # uname 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82551 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.341 killing process with pid 82551 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82551' 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@969 -- # kill 82551 00:14:23.341 10:08:51 ublk -- common/autotest_common.sh@974 -- # wait 82551 00:14:23.600 [2024-11-03 10:08:51.799527] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:23.600 [2024-11-03 10:08:51.799597] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:23.860 00:14:23.860 real 0m19.376s 00:14:23.860 user 0m27.885s 00:14:23.860 sys 0m8.959s 00:14:23.860 10:08:52 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:23.860 10:08:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.860 ************************************ 00:14:23.860 END TEST ublk 00:14:23.860 ************************************ 00:14:23.860 10:08:52 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:23.860 
10:08:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:23.860 10:08:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.860 10:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:23.860 ************************************ 00:14:23.860 START TEST ublk_recovery 00:14:23.860 ************************************ 00:14:23.860 10:08:52 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:23.860 * Looking for test storage... 00:14:23.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:23.860 10:08:52 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:23.860 10:08:52 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:23.860 10:08:52 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:24.120 10:08:52 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.120 10:08:52 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:24.120 10:08:52 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.120 10:08:52 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.120 --rc genhtml_branch_coverage=1 00:14:24.120 --rc genhtml_function_coverage=1 00:14:24.120 --rc genhtml_legend=1 00:14:24.120 --rc geninfo_all_blocks=1 00:14:24.120 --rc geninfo_unexecuted_blocks=1 00:14:24.120 00:14:24.120 ' 00:14:24.120 10:08:52 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.120 --rc genhtml_branch_coverage=1 00:14:24.120 --rc genhtml_function_coverage=1 00:14:24.120 --rc genhtml_legend=1 00:14:24.120 --rc geninfo_all_blocks=1 00:14:24.120 --rc geninfo_unexecuted_blocks=1 00:14:24.120 00:14:24.120 ' 00:14:24.120 10:08:52 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.120 --rc genhtml_branch_coverage=1 00:14:24.120 --rc genhtml_function_coverage=1 00:14:24.120 --rc genhtml_legend=1 00:14:24.120 --rc geninfo_all_blocks=1 00:14:24.120 --rc geninfo_unexecuted_blocks=1 00:14:24.120 00:14:24.120 ' 00:14:24.121 10:08:52 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.121 --rc genhtml_branch_coverage=1 00:14:24.121 --rc genhtml_function_coverage=1 00:14:24.121 --rc genhtml_legend=1 00:14:24.121 --rc geninfo_all_blocks=1 00:14:24.121 --rc geninfo_unexecuted_blocks=1 00:14:24.121 00:14:24.121 ' 00:14:24.121 10:08:52 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:24.121 10:08:52 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:24.121 10:08:52 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:24.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.121 10:08:52 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82921 00:14:24.121 10:08:52 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.121 10:08:52 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82921 00:14:24.121 10:08:52 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82921 ']' 00:14:24.121 10:08:52 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.121 10:08:52 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.121 10:08:52 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.121 10:08:52 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:24.121 10:08:52 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.121 10:08:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.121 [2024-11-03 10:08:52.336121] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:24.121 [2024-11-03 10:08:52.336250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82921 ] 00:14:24.121 [2024-11-03 10:08:52.468755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:24.379 [2024-11-03 10:08:52.509581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.379 [2024-11-03 10:08:52.509663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:24.945 10:08:53 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.945 [2024-11-03 10:08:53.084242] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:24.945 [2024-11-03 10:08:53.085435] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.945 10:08:53 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.945 malloc0 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.945 10:08:53 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.945 [2024-11-03 10:08:53.124337] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:24.945 [2024-11-03 10:08:53.124417] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:24.945 [2024-11-03 10:08:53.124429] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:24.945 [2024-11-03 10:08:53.124437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:24.945 [2024-11-03 10:08:53.132394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:24.945 [2024-11-03 10:08:53.132430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:24.945 [2024-11-03 10:08:53.140251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:24.945 [2024-11-03 10:08:53.140375] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:24.945 [2024-11-03 10:08:53.151266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:24.945 1 00:14:24.945 10:08:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.945 10:08:53 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:25.880 10:08:54 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82949 00:14:25.880 10:08:54 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:25.880 10:08:54 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:26.139 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:26.139 fio-3.35 00:14:26.139 Starting 1 process 00:14:31.456 10:08:59 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82921 00:14:31.456 10:08:59 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:36.751 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82921 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:36.751 10:09:04 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83058 00:14:36.751 10:09:04 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.751 10:09:04 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83058 00:14:36.751 10:09:04 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83058 ']' 00:14:36.751 10:09:04 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:36.751 10:09:04 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.751 10:09:04 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.751 10:09:04 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.751 10:09:04 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.751 10:09:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 [2024-11-03 10:09:04.244038] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
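At this point the suite has deliberately SIGKILLed the first target (pid 82921) while fio was mid-run against /dev/ublkb1, and is bringing up a replacement target that will reattach the disk with ublk_recover_disk below. Condensed from the xtrace above, the scenario is essentially the following — a sketch, not the literal ublk_recovery.sh, with relative paths assumed to run from the spdk repo root:

  # export a malloc bdev through ublk and drive I/O against it
  build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & fio_proc=$!

  # kill the target mid-I/O, start a fresh one, and recover the disk
  kill -9 "$spdk_pid"
  build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1
  wait "$fio_proc"   # the 60 s fio job rides out the restart and completes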
00:14:36.751 [2024-11-03 10:09:04.244185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83058 ] 00:14:36.751 [2024-11-03 10:09:04.379025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.751 [2024-11-03 10:09:04.421289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.751 [2024-11-03 10:09:04.421351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:36.751 10:09:05 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 [2024-11-03 10:09:05.041243] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:36.751 [2024-11-03 10:09:05.042511] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.751 10:09:05 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 malloc0 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.751 10:09:05 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.751 10:09:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.751 [2024-11-03 10:09:05.081370] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:36.751 [2024-11-03 10:09:05.081403] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:36.751 [2024-11-03 10:09:05.081416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:36.752 [2024-11-03 10:09:05.089287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:36.752 [2024-11-03 10:09:05.089310] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:36.752 [2024-11-03 10:09:05.089321] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:36.752 [2024-11-03 10:09:05.089390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:36.752 1 00:14:36.752 10:09:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.752 10:09:05 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82949 00:14:36.752 [2024-11-03 10:09:05.097251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:36.752 [2024-11-03 10:09:05.103611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:36.752 [2024-11-03 10:09:05.111436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:36.752 [2024-11-03 
10:09:05.111452] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:32.981 00:15:32.981 fio_test: (groupid=0, jobs=1): err= 0: pid=82952: Sun Nov 3 10:09:54 2024 00:15:32.981 read: IOPS=25.5k, BW=99.8MiB/s (105MB/s)(5988MiB/60001msec) 00:15:32.981 slat (nsec): min=1262, max=136655, avg=5532.42, stdev=1428.61 00:15:32.981 clat (usec): min=648, max=5956.8k, avg=2468.82, stdev=39362.16 00:15:32.981 lat (usec): min=653, max=5956.8k, avg=2474.35, stdev=39362.16 00:15:32.981 clat percentiles (usec): 00:15:32.981 | 1.00th=[ 1844], 5.00th=[ 1975], 10.00th=[ 1991], 20.00th=[ 2024], 00:15:32.981 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:15:32.981 | 70.00th=[ 2114], 80.00th=[ 2114], 90.00th=[ 2180], 95.00th=[ 3228], 00:15:32.981 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7439], 99.95th=[ 8586], 00:15:32.981 | 99.99th=[13042] 00:15:32.981 bw ( KiB/s): min=28320, max=118728, per=100.00%, avg=112550.89, stdev=13744.33, samples=108 00:15:32.981 iops : min= 7080, max=29682, avg=28137.72, stdev=3436.08, samples=108 00:15:32.981 write: IOPS=25.5k, BW=99.7MiB/s (105MB/s)(5980MiB/60001msec); 0 zone resets 00:15:32.981 slat (nsec): min=1331, max=192554, avg=5767.85, stdev=1470.10 00:15:32.981 clat (usec): min=668, max=5956.9k, avg=2532.35, stdev=37582.45 00:15:32.981 lat (usec): min=673, max=5957.0k, avg=2538.12, stdev=37582.45 00:15:32.981 clat percentiles (usec): 00:15:32.981 | 1.00th=[ 1893], 5.00th=[ 2057], 10.00th=[ 2089], 20.00th=[ 2114], 00:15:32.981 | 30.00th=[ 2147], 40.00th=[ 2147], 50.00th=[ 2180], 60.00th=[ 2180], 00:15:32.981 | 70.00th=[ 2212], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3163], 00:15:32.981 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7504], 99.95th=[ 8586], 00:15:32.981 | 99.99th=[13042] 00:15:32.981 bw ( KiB/s): min=28360, max=117664, per=100.00%, avg=112404.59, stdev=13783.29, samples=108 00:15:32.981 iops : min= 7090, max=29416, avg=28101.15, stdev=3445.82, samples=108 00:15:32.981 lat (usec) : 750=0.01%, 1000=0.01% 00:15:32.981 lat (msec) : 2=6.88%, 4=89.98%, 10=3.10%, 20=0.03%, >=2000=0.01% 00:15:32.981 cpu : usr=5.47%, sys=29.41%, ctx=101018, majf=0, minf=14 00:15:32.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:32.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.981 issued rwts: total=1532918,1530932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.981 00:15:32.981 Run status group 0 (all jobs): 00:15:32.981 READ: bw=99.8MiB/s (105MB/s), 99.8MiB/s-99.8MiB/s (105MB/s-105MB/s), io=5988MiB (6279MB), run=60001-60001msec 00:15:32.981 WRITE: bw=99.7MiB/s (105MB/s), 99.7MiB/s-99.7MiB/s (105MB/s-105MB/s), io=5980MiB (6271MB), run=60001-60001msec 00:15:32.981 00:15:32.981 Disk stats (read/write): 00:15:32.981 ublkb1: ios=1529713/1527888, merge=0/0, ticks=3692748/3652419, in_queue=7345168, util=99.89% 00:15:32.981 10:09:54 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:32.981 10:09:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.981 10:09:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.981 [2024-11-03 10:09:54.413988] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:32.981 [2024-11-03 10:09:54.450376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:15:32.981 [2024-11-03 10:09:54.450528] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:32.981 [2024-11-03 10:09:54.461249] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:32.981 [2024-11-03 10:09:54.461359] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:32.981 [2024-11-03 10:09:54.461366] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:32.981 10:09:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.981 10:09:54 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:32.981 10:09:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.981 10:09:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.981 [2024-11-03 10:09:54.477329] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.981 [2024-11-03 10:09:54.478631] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:32.982 [2024-11-03 10:09:54.478662] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.982 10:09:54 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:32.982 10:09:54 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:32.982 10:09:54 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83058 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83058 ']' 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83058 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83058 00:15:32.982 killing process with pid 83058 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83058' 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83058 00:15:32.982 10:09:54 ublk_recovery -- common/autotest_common.sh@974 -- # wait 83058 00:15:32.982 [2024-11-03 10:09:54.734529] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.982 [2024-11-03 10:09:54.734591] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:32.982 00:15:32.982 real 1m3.013s 00:15:32.982 user 1m40.481s 00:15:32.982 sys 0m35.680s 00:15:32.982 10:09:55 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.982 ************************************ 00:15:32.982 END TEST ublk_recovery 00:15:32.982 ************************************ 00:15:32.982 10:09:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.982 10:09:55 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:32.982 10:09:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.982 10:09:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.982 10:09:55 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- 
spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:32.982 10:09:55 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:32.982 10:09:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:32.982 10:09:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.982 10:09:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.982 ************************************ 00:15:32.982 START TEST ftl 00:15:32.982 ************************************ 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:32.982 * Looking for test storage... 00:15:32.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.982 10:09:55 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.982 10:09:55 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.982 10:09:55 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.982 10:09:55 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.982 10:09:55 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.982 10:09:55 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.982 10:09:55 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.982 10:09:55 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:32.982 10:09:55 ftl -- scripts/common.sh@345 -- # : 1 00:15:32.982 10:09:55 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.982 10:09:55 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.982 10:09:55 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:32.982 10:09:55 ftl -- scripts/common.sh@353 -- # local d=1 00:15:32.982 10:09:55 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.982 10:09:55 ftl -- scripts/common.sh@355 -- # echo 1 00:15:32.982 10:09:55 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.982 10:09:55 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@353 -- # local d=2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.982 10:09:55 ftl -- scripts/common.sh@355 -- # echo 2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.982 10:09:55 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.982 10:09:55 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.982 10:09:55 ftl -- scripts/common.sh@368 -- # return 0 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.982 --rc genhtml_branch_coverage=1 00:15:32.982 --rc genhtml_function_coverage=1 00:15:32.982 --rc genhtml_legend=1 00:15:32.982 --rc geninfo_all_blocks=1 00:15:32.982 --rc geninfo_unexecuted_blocks=1 00:15:32.982 00:15:32.982 ' 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.982 --rc genhtml_branch_coverage=1 00:15:32.982 --rc genhtml_function_coverage=1 00:15:32.982 --rc genhtml_legend=1 00:15:32.982 --rc geninfo_all_blocks=1 00:15:32.982 --rc geninfo_unexecuted_blocks=1 00:15:32.982 00:15:32.982 ' 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.982 --rc genhtml_branch_coverage=1 00:15:32.982 --rc genhtml_function_coverage=1 00:15:32.982 --rc genhtml_legend=1 00:15:32.982 --rc geninfo_all_blocks=1 00:15:32.982 --rc geninfo_unexecuted_blocks=1 00:15:32.982 00:15:32.982 ' 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:32.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.982 --rc genhtml_branch_coverage=1 00:15:32.982 --rc genhtml_function_coverage=1 00:15:32.982 --rc genhtml_legend=1 00:15:32.982 --rc geninfo_all_blocks=1 00:15:32.982 --rc geninfo_unexecuted_blocks=1 00:15:32.982 00:15:32.982 ' 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:32.982 10:09:55 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:32.982 10:09:55 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.982 10:09:55 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.982 10:09:55 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
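The lt/cmp_versions helper tracing above (and again below for each nested test) is scripts/common.sh's dotted-version comparator, used here to decide which lcov option spellings to use for versions below 2. A self-contained sketch of the same logic, simplified to the numeric case — the real helper also splits on '-' and ':' and guards non-numeric fields:

  cmp_lt() {   # exit 0 iff dotted version $1 < $2
      local -a ver1 ver2
      IFS=. read -ra ver1 <<< "$1"
      IFS=. read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
          (( a < b )) && return 0                 # first differing field decides
          (( a > b )) && return 1
      done
      return 1                                    # equal is not strictly less
  }
  cmp_lt 1.15 2 && echo "lcov < 2: use the --rc lcov_*_coverage=1 spellings"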
00:15:32.982 10:09:55 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:32.982 10:09:55 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.982 10:09:55 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:32.982 10:09:55 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:32.982 10:09:55 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.982 10:09:55 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.982 10:09:55 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:32.982 10:09:55 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:32.982 10:09:55 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:32.982 10:09:55 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:32.982 10:09:55 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:32.982 10:09:55 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:32.982 10:09:55 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.982 10:09:55 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.982 10:09:55 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:32.982 10:09:55 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:32.982 10:09:55 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:32.982 10:09:55 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:32.982 10:09:55 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:32.982 10:09:55 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:32.982 10:09:55 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:32.982 10:09:55 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:32.982 10:09:55 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:32.982 10:09:55 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:32.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.982 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.982 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.982 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.982 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83853 00:15:32.982 10:09:55 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83853 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@831 -- # '[' -z 83853 ']' 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.982 10:09:55 ftl -- 
ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:32.982 10:09:55 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.983 10:09:55 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.983 10:09:55 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.983 10:09:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:32.983 [2024-11-03 10:09:55.962736] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:32.983 [2024-11-03 10:09:55.963124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83853 ] 00:15:32.983 [2024-11-03 10:09:56.097568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.983 [2024-11-03 10:09:56.161072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.983 10:09:56 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.983 10:09:56 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:32.983 10:09:56 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:32.983 10:09:56 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:32.983 10:09:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:32.983 10:09:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:32.983 10:09:57 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:32.983 10:09:57 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:32.983 10:09:57 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@50 -- # break 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@63 -- # break 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@66 -- # killprocess 83853 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@950 -- # '[' -z 83853 ']' 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@954 -- # kill -0 83853 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@955 -- # uname 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.983 10:09:58 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83853 00:15:32.983 killing process with pid 83853 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83853' 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@969 -- # kill 83853 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@974 -- # wait 83853 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:32.983 10:09:58 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.983 10:09:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:32.983 ************************************ 00:15:32.983 START TEST ftl_fio_basic 00:15:32.983 ************************************ 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:32.983 * Looking for test storage... 00:15:32.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:32.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.983 --rc genhtml_branch_coverage=1 00:15:32.983 --rc genhtml_function_coverage=1 00:15:32.983 --rc genhtml_legend=1 00:15:32.983 --rc geninfo_all_blocks=1 00:15:32.983 --rc geninfo_unexecuted_blocks=1 00:15:32.983 00:15:32.983 ' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:32.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.983 --rc genhtml_branch_coverage=1 00:15:32.983 --rc genhtml_function_coverage=1 00:15:32.983 --rc genhtml_legend=1 00:15:32.983 --rc geninfo_all_blocks=1 00:15:32.983 --rc geninfo_unexecuted_blocks=1 00:15:32.983 00:15:32.983 ' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:32.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.983 --rc genhtml_branch_coverage=1 00:15:32.983 --rc genhtml_function_coverage=1 00:15:32.983 --rc genhtml_legend=1 00:15:32.983 --rc geninfo_all_blocks=1 00:15:32.983 --rc geninfo_unexecuted_blocks=1 00:15:32.983 00:15:32.983 ' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:32.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.983 --rc genhtml_branch_coverage=1 00:15:32.983 --rc genhtml_function_coverage=1 00:15:32.983 --rc genhtml_legend=1 00:15:32.983 --rc geninfo_all_blocks=1 00:15:32.983 --rc geninfo_unexecuted_blocks=1 00:15:32.983 00:15:32.983 ' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:32.983 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83974 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83974 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 83974 ']' 00:15:32.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.984 10:09:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:32.984 [2024-11-03 10:09:58.844475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
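The FTL_BDEV_NAME/FTL_JSON_CONF exports above are what the fio job files read to locate the ftl0 bdev, and the 'basic' suite then reduces to running three job files against it. Roughly, as a sketch only — the harness actually wraps fio with SPDK's bdev ioengine plumbing, and the job-file location under test/ftl/config/fio/ is an assumption:

  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  for t in randw-verify randw-verify-j2 randw-verify-depth128; do
      # each job targets the ftl0 bdev described by the JSON config,
      # bounded by the 240 s timeout set above
      timeout 240 fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/$t.fio
  done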
00:15:32.984 [2024-11-03 10:09:58.844848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83974 ] 00:15:32.984 [2024-11-03 10:09:58.979063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:32.984 [2024-11-03 10:09:59.020894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.984 [2024-11-03 10:09:59.021133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.984 [2024-11-03 10:09:59.021178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:32.984 10:09:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:32.984 { 00:15:32.984 "name": "nvme0n1", 00:15:32.984 "aliases": [ 00:15:32.984 "45bf8911-9860-446c-bf58-24be9f5c22b8" 00:15:32.984 ], 00:15:32.984 "product_name": "NVMe disk", 00:15:32.984 "block_size": 4096, 00:15:32.984 "num_blocks": 1310720, 00:15:32.984 "uuid": "45bf8911-9860-446c-bf58-24be9f5c22b8", 00:15:32.984 "numa_id": -1, 00:15:32.984 "assigned_rate_limits": { 00:15:32.984 "rw_ios_per_sec": 0, 00:15:32.984 "rw_mbytes_per_sec": 0, 00:15:32.984 "r_mbytes_per_sec": 0, 00:15:32.984 "w_mbytes_per_sec": 0 00:15:32.984 }, 00:15:32.984 "claimed": false, 00:15:32.984 "zoned": false, 00:15:32.984 "supported_io_types": { 00:15:32.984 "read": true, 00:15:32.984 "write": true, 00:15:32.984 "unmap": true, 00:15:32.984 "flush": true, 00:15:32.984 "reset": true, 00:15:32.984 "nvme_admin": true, 00:15:32.984 "nvme_io": true, 00:15:32.984 "nvme_io_md": false, 00:15:32.984 "write_zeroes": true, 00:15:32.984 "zcopy": false, 00:15:32.984 "get_zone_info": false, 00:15:32.984 "zone_management": false, 00:15:32.984 "zone_append": false, 00:15:32.984 "compare": true, 00:15:32.984 "compare_and_write": false, 00:15:32.984 "abort": true, 00:15:32.984 
"seek_hole": false, 00:15:32.984 "seek_data": false, 00:15:32.984 "copy": true, 00:15:32.984 "nvme_iov_md": false 00:15:32.984 }, 00:15:32.984 "driver_specific": { 00:15:32.984 "nvme": [ 00:15:32.984 { 00:15:32.984 "pci_address": "0000:00:11.0", 00:15:32.984 "trid": { 00:15:32.984 "trtype": "PCIe", 00:15:32.984 "traddr": "0000:00:11.0" 00:15:32.984 }, 00:15:32.984 "ctrlr_data": { 00:15:32.984 "cntlid": 0, 00:15:32.984 "vendor_id": "0x1b36", 00:15:32.984 "model_number": "QEMU NVMe Ctrl", 00:15:32.984 "serial_number": "12341", 00:15:32.984 "firmware_revision": "8.0.0", 00:15:32.984 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:32.984 "oacs": { 00:15:32.984 "security": 0, 00:15:32.984 "format": 1, 00:15:32.984 "firmware": 0, 00:15:32.984 "ns_manage": 1 00:15:32.984 }, 00:15:32.984 "multi_ctrlr": false, 00:15:32.984 "ana_reporting": false 00:15:32.984 }, 00:15:32.984 "vs": { 00:15:32.984 "nvme_version": "1.4" 00:15:32.984 }, 00:15:32.984 "ns_data": { 00:15:32.984 "id": 1, 00:15:32.984 "can_share": false 00:15:32.984 } 00:15:32.984 } 00:15:32.984 ], 00:15:32.984 "mp_policy": "active_passive" 00:15:32.984 } 00:15:32.984 } 00:15:32.984 ]' 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=8e611ff1-0f71-4697-81cc-477c73f4ca79 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8e611ff1-0f71-4697-81cc-477c73f4ca79 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=51bf08d3-566a-429b-8ad8-f486b5f57191 
00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:32.984 10:10:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:32.984 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:32.984 { 00:15:32.984 "name": "51bf08d3-566a-429b-8ad8-f486b5f57191", 00:15:32.984 "aliases": [ 00:15:32.984 "lvs/nvme0n1p0" 00:15:32.984 ], 00:15:32.984 "product_name": "Logical Volume", 00:15:32.984 "block_size": 4096, 00:15:32.984 "num_blocks": 26476544, 00:15:32.984 "uuid": "51bf08d3-566a-429b-8ad8-f486b5f57191", 00:15:32.984 "assigned_rate_limits": { 00:15:32.984 "rw_ios_per_sec": 0, 00:15:32.984 "rw_mbytes_per_sec": 0, 00:15:32.984 "r_mbytes_per_sec": 0, 00:15:32.984 "w_mbytes_per_sec": 0 00:15:32.984 }, 00:15:32.984 "claimed": false, 00:15:32.984 "zoned": false, 00:15:32.984 "supported_io_types": { 00:15:32.984 "read": true, 00:15:32.984 "write": true, 00:15:32.984 "unmap": true, 00:15:32.984 "flush": false, 00:15:32.984 "reset": true, 00:15:32.984 "nvme_admin": false, 00:15:32.984 "nvme_io": false, 00:15:32.984 "nvme_io_md": false, 00:15:32.984 "write_zeroes": true, 00:15:32.984 "zcopy": false, 00:15:32.984 "get_zone_info": false, 00:15:32.984 "zone_management": false, 00:15:32.984 "zone_append": false, 00:15:32.984 "compare": false, 00:15:32.984 "compare_and_write": false, 00:15:32.984 "abort": false, 00:15:32.984 "seek_hole": true, 00:15:32.984 "seek_data": true, 00:15:32.984 "copy": false, 00:15:32.984 "nvme_iov_md": false 00:15:32.984 }, 00:15:32.984 "driver_specific": { 00:15:32.984 "lvol": { 00:15:32.984 "lvol_store_uuid": "8e611ff1-0f71-4697-81cc-477c73f4ca79", 00:15:32.984 "base_bdev": "nvme0n1", 00:15:32.985 "thin_provision": true, 00:15:32.985 "num_allocated_clusters": 0, 00:15:32.985 "snapshot": false, 00:15:32.985 "clone": false, 00:15:32.985 "esnap_clone": false 00:15:32.985 } 00:15:32.985 } 00:15:32.985 } 00:15:32.985 ]' 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:32.985 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:33.242 10:10:01 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:33.242 { 00:15:33.242 "name": "51bf08d3-566a-429b-8ad8-f486b5f57191", 00:15:33.242 "aliases": [ 00:15:33.242 "lvs/nvme0n1p0" 00:15:33.242 ], 00:15:33.242 "product_name": "Logical Volume", 00:15:33.242 "block_size": 4096, 00:15:33.242 "num_blocks": 26476544, 00:15:33.242 "uuid": "51bf08d3-566a-429b-8ad8-f486b5f57191", 00:15:33.242 "assigned_rate_limits": { 00:15:33.242 "rw_ios_per_sec": 0, 00:15:33.242 "rw_mbytes_per_sec": 0, 00:15:33.242 "r_mbytes_per_sec": 0, 00:15:33.242 "w_mbytes_per_sec": 0 00:15:33.242 }, 00:15:33.242 "claimed": false, 00:15:33.242 "zoned": false, 00:15:33.242 "supported_io_types": { 00:15:33.242 "read": true, 00:15:33.242 "write": true, 00:15:33.242 "unmap": true, 00:15:33.242 "flush": false, 00:15:33.242 "reset": true, 00:15:33.242 "nvme_admin": false, 00:15:33.242 "nvme_io": false, 00:15:33.242 "nvme_io_md": false, 00:15:33.242 "write_zeroes": true, 00:15:33.242 "zcopy": false, 00:15:33.242 "get_zone_info": false, 00:15:33.242 "zone_management": false, 00:15:33.242 "zone_append": false, 00:15:33.242 "compare": false, 00:15:33.242 "compare_and_write": false, 00:15:33.242 "abort": false, 00:15:33.242 "seek_hole": true, 00:15:33.242 "seek_data": true, 00:15:33.242 "copy": false, 00:15:33.242 "nvme_iov_md": false 00:15:33.242 }, 00:15:33.242 "driver_specific": { 00:15:33.242 "lvol": { 00:15:33.242 "lvol_store_uuid": "8e611ff1-0f71-4697-81cc-477c73f4ca79", 00:15:33.242 "base_bdev": "nvme0n1", 00:15:33.242 "thin_provision": true, 00:15:33.242 "num_allocated_clusters": 0, 00:15:33.242 "snapshot": false, 00:15:33.242 "clone": false, 00:15:33.242 "esnap_clone": false 00:15:33.242 } 00:15:33.242 } 00:15:33.242 } 00:15:33.242 ]' 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:33.242 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:33.499 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:33.499 10:10:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51bf08d3-566a-429b-8ad8-f486b5f57191 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:33.758 { 00:15:33.758 "name": "51bf08d3-566a-429b-8ad8-f486b5f57191", 00:15:33.758 "aliases": [ 00:15:33.758 "lvs/nvme0n1p0" 00:15:33.758 ], 00:15:33.758 "product_name": "Logical Volume", 00:15:33.758 "block_size": 4096, 00:15:33.758 "num_blocks": 26476544, 00:15:33.758 "uuid": "51bf08d3-566a-429b-8ad8-f486b5f57191", 00:15:33.758 "assigned_rate_limits": { 00:15:33.758 "rw_ios_per_sec": 0, 00:15:33.758 "rw_mbytes_per_sec": 0, 00:15:33.758 "r_mbytes_per_sec": 0, 00:15:33.758 "w_mbytes_per_sec": 0 00:15:33.758 }, 00:15:33.758 "claimed": false, 00:15:33.758 "zoned": false, 00:15:33.758 "supported_io_types": { 00:15:33.758 "read": true, 00:15:33.758 "write": true, 00:15:33.758 "unmap": true, 00:15:33.758 "flush": false, 00:15:33.758 "reset": true, 00:15:33.758 "nvme_admin": false, 00:15:33.758 "nvme_io": false, 00:15:33.758 "nvme_io_md": false, 00:15:33.758 "write_zeroes": true, 00:15:33.758 "zcopy": false, 00:15:33.758 "get_zone_info": false, 00:15:33.758 "zone_management": false, 00:15:33.758 "zone_append": false, 00:15:33.758 "compare": false, 00:15:33.758 "compare_and_write": false, 00:15:33.758 "abort": false, 00:15:33.758 "seek_hole": true, 00:15:33.758 "seek_data": true, 00:15:33.758 "copy": false, 00:15:33.758 "nvme_iov_md": false 00:15:33.758 }, 00:15:33.758 "driver_specific": { 00:15:33.758 "lvol": { 00:15:33.758 "lvol_store_uuid": "8e611ff1-0f71-4697-81cc-477c73f4ca79", 00:15:33.758 "base_bdev": "nvme0n1", 00:15:33.758 "thin_provision": true, 00:15:33.758 "num_allocated_clusters": 0, 00:15:33.758 "snapshot": false, 00:15:33.758 "clone": false, 00:15:33.758 "esnap_clone": false 00:15:33.758 } 00:15:33.758 } 00:15:33.758 } 00:15:33.758 ]' 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:33.758 10:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 51bf08d3-566a-429b-8ad8-f486b5f57191 -c nvc0n1p0 --l2p_dram_limit 60 00:15:34.019 [2024-11-03 10:10:02.295543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.295586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:34.019 [2024-11-03 10:10:02.295599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:34.019 
[2024-11-03 10:10:02.295615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.295669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.295678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:34.019 [2024-11-03 10:10:02.295686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:34.019 [2024-11-03 10:10:02.295695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.295721] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:34.019 [2024-11-03 10:10:02.295940] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:34.019 [2024-11-03 10:10:02.295952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.295968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:34.019 [2024-11-03 10:10:02.295975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:15:34.019 [2024-11-03 10:10:02.295990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.296019] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7454fe60-8bbc-41db-8a78-4560fd27338c 00:15:34.019 [2024-11-03 10:10:02.297339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.297364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:34.019 [2024-11-03 10:10:02.297377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:34.019 [2024-11-03 10:10:02.297383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.303993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.304024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:34.019 [2024-11-03 10:10:02.304034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.525 ms 00:15:34.019 [2024-11-03 10:10:02.304040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.304141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.304156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:34.019 [2024-11-03 10:10:02.304174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:15:34.019 [2024-11-03 10:10:02.304188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.304242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.304250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:34.019 [2024-11-03 10:10:02.304258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:34.019 [2024-11-03 10:10:02.304264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.304302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:34.019 [2024-11-03 10:10:02.305869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 
10:10:02.305895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:34.019 [2024-11-03 10:10:02.305912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:15:34.019 [2024-11-03 10:10:02.305920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.305957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.305966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:34.019 [2024-11-03 10:10:02.305981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:34.019 [2024-11-03 10:10:02.305991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.306015] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:34.019 [2024-11-03 10:10:02.306151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:34.019 [2024-11-03 10:10:02.306165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:34.019 [2024-11-03 10:10:02.306185] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:34.019 [2024-11-03 10:10:02.306194] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:34.019 [2024-11-03 10:10:02.306203] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:34.019 [2024-11-03 10:10:02.306209] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:34.019 [2024-11-03 10:10:02.306218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:34.019 [2024-11-03 10:10:02.306233] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:34.019 [2024-11-03 10:10:02.306240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:34.019 [2024-11-03 10:10:02.306247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.306254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:34.019 [2024-11-03 10:10:02.306260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:15:34.019 [2024-11-03 10:10:02.306293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.306364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.019 [2024-11-03 10:10:02.306373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:34.019 [2024-11-03 10:10:02.306379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:34.019 [2024-11-03 10:10:02.306386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.019 [2024-11-03 10:10:02.306482] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:34.019 [2024-11-03 10:10:02.306491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:34.019 [2024-11-03 10:10:02.306497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:34.019 [2024-11-03 10:10:02.306504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.019 [2024-11-03 10:10:02.306511] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:34.019 [2024-11-03 10:10:02.306519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:34.019 [2024-11-03 10:10:02.306525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:34.019 [2024-11-03 10:10:02.306533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:34.019 [2024-11-03 10:10:02.306539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:34.019 [2024-11-03 10:10:02.306546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:34.019 [2024-11-03 10:10:02.306552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:34.019 [2024-11-03 10:10:02.306559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:34.019 [2024-11-03 10:10:02.306565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:34.019 [2024-11-03 10:10:02.306574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:34.019 [2024-11-03 10:10:02.306580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:34.019 [2024-11-03 10:10:02.306587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.019 [2024-11-03 10:10:02.306593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:34.019 [2024-11-03 10:10:02.306600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:34.020 [2024-11-03 10:10:02.306610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:34.020 [2024-11-03 10:10:02.306624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.020 [2024-11-03 10:10:02.306638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:34.020 [2024-11-03 10:10:02.306646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.020 [2024-11-03 10:10:02.306659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:34.020 [2024-11-03 10:10:02.306665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.020 [2024-11-03 10:10:02.306681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:34.020 [2024-11-03 10:10:02.306691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:34.020 [2024-11-03 10:10:02.306704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:34.020 [2024-11-03 10:10:02.306710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:34.020 [2024-11-03 10:10:02.306723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:34.020 [2024-11-03 10:10:02.306731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:34.020 [2024-11-03 10:10:02.306736] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:34.020 [2024-11-03 10:10:02.306743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:34.020 [2024-11-03 10:10:02.306749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:34.020 [2024-11-03 10:10:02.306756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:34.020 [2024-11-03 10:10:02.306769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:34.020 [2024-11-03 10:10:02.306775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306782] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:34.020 [2024-11-03 10:10:02.306789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:34.020 [2024-11-03 10:10:02.306798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:34.020 [2024-11-03 10:10:02.306804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:34.020 [2024-11-03 10:10:02.306812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:34.020 [2024-11-03 10:10:02.306819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:34.020 [2024-11-03 10:10:02.306826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:34.020 [2024-11-03 10:10:02.306833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:34.020 [2024-11-03 10:10:02.306840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:34.020 [2024-11-03 10:10:02.306847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:34.020 [2024-11-03 10:10:02.306858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:34.020 [2024-11-03 10:10:02.306879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:34.020 [2024-11-03 10:10:02.306896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:34.020 [2024-11-03 10:10:02.306903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:34.020 [2024-11-03 10:10:02.306910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:34.020 [2024-11-03 10:10:02.306915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:34.020 [2024-11-03 10:10:02.306922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:34.020 [2024-11-03 10:10:02.306927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:34.020 [2024-11-03 10:10:02.306936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:34.020 [2024-11-03 10:10:02.306941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:34.020 [2024-11-03 10:10:02.306948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:34.020 [2024-11-03 10:10:02.306953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:34.020 [2024-11-03 10:10:02.306960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:34.020 [2024-11-03 10:10:02.306965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:34.020 [2024-11-03 10:10:02.306972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:34.020 [2024-11-03 10:10:02.306977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:34.020 [2024-11-03 10:10:02.306984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:34.020 [2024-11-03 10:10:02.306990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:34.020 [2024-11-03 10:10:02.306998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:34.020 [2024-11-03 10:10:02.307004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:34.020 [2024-11-03 10:10:02.307011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:34.020 [2024-11-03 10:10:02.307017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:34.020 [2024-11-03 10:10:02.307024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.020 [2024-11-03 10:10:02.307031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:34.020 [2024-11-03 10:10:02.307040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:15:34.020 [2024-11-03 10:10:02.307046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.020 [2024-11-03 10:10:02.307118] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
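(The layout dump just printed pins down the geometry: a 103424 MiB base device, a 5171 MiB NV cache split into 5 chunks, and an L2P of 20971520 four-byte entries, one per 4 KiB user block. Two quick checks against those numbers — a sketch, with the figures taken straight from the dump above:

    echo $((20971520 * 4 / 1024 / 1024))      # 80    -> L2P table in MiB, matching "Region l2p ... 80.00 MiB"
    echo $((20971520 * 4096 / 1024 / 1024))   # 81920 -> MiB addressable through ftl0 (20971520 blocks of 4 KiB)

Because bdev_ftl_create was given --l2p_dram_limit 60, only part of that 80 MiB table may stay resident; the "l2p maximum resident size is: 59 (of 60) MiB" notice further down reflects exactly that cap. The scrub of the 5 NV cache chunks announced next is the long pole of startup: 2504.007 ms of the 2678.505 ms total.)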
00:15:34.020 [2024-11-03 10:10:02.307126] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:36.568 [2024-11-03 10:10:04.811139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.811398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:36.568 [2024-11-03 10:10:04.811424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2504.007 ms 00:15:36.568 [2024-11-03 10:10:04.811434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.836498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.836580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:36.568 [2024-11-03 10:10:04.836613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.904 ms 00:15:36.568 [2024-11-03 10:10:04.836631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.836867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.836887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:36.568 [2024-11-03 10:10:04.836909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:15:36.568 [2024-11-03 10:10:04.836926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.849000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.849037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:36.568 [2024-11-03 10:10:04.849050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.945 ms 00:15:36.568 [2024-11-03 10:10:04.849058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.849107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.849115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:36.568 [2024-11-03 10:10:04.849126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:15:36.568 [2024-11-03 10:10:04.849134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.849606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.849621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:36.568 [2024-11-03 10:10:04.849634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:15:36.568 [2024-11-03 10:10:04.849642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.849784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.849802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:36.568 [2024-11-03 10:10:04.849814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:15:36.568 [2024-11-03 10:10:04.849823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.856926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.857062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:36.568 [2024-11-03 
10:10:04.857082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.056 ms 00:15:36.568 [2024-11-03 10:10:04.857090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.568 [2024-11-03 10:10:04.866066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:36.568 [2024-11-03 10:10:04.883336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.568 [2024-11-03 10:10:04.883462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:36.568 [2024-11-03 10:10:04.883476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.152 ms 00:15:36.568 [2024-11-03 10:10:04.883497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.932425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.932472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:36.830 [2024-11-03 10:10:04.932488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.886 ms 00:15:36.830 [2024-11-03 10:10:04.932501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.932696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.932709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:36.830 [2024-11-03 10:10:04.932730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:15:36.830 [2024-11-03 10:10:04.932743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.935922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.935956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:36.830 [2024-11-03 10:10:04.935966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:15:36.830 [2024-11-03 10:10:04.935979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.938607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.938744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:36.830 [2024-11-03 10:10:04.938760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:15:36.830 [2024-11-03 10:10:04.938771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.939086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.939099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:36.830 [2024-11-03 10:10:04.939107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:15:36.830 [2024-11-03 10:10:04.939130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.962542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.962594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:36.830 [2024-11-03 10:10:04.962604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.385 ms 00:15:36.830 [2024-11-03 10:10:04.962627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.966905] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.966939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:36.830 [2024-11-03 10:10:04.966948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.218 ms 00:15:36.830 [2024-11-03 10:10:04.966958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.969602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.969634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:36.830 [2024-11-03 10:10:04.969643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:15:36.830 [2024-11-03 10:10:04.969652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.973061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.973186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:36.830 [2024-11-03 10:10:04.973202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.361 ms 00:15:36.830 [2024-11-03 10:10:04.973214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.973313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.973334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:36.830 [2024-11-03 10:10:04.973356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:36.830 [2024-11-03 10:10:04.973366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.973445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:36.830 [2024-11-03 10:10:04.973456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:36.830 [2024-11-03 10:10:04.973465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:36.830 [2024-11-03 10:10:04.973484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:36.830 [2024-11-03 10:10:04.974503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2678.505 ms, result 0 00:15:36.830 { 00:15:36.830 "name": "ftl0", 00:15:36.830 "uuid": "7454fe60-8bbc-41db-8a78-4560fd27338c" 00:15:36.830 } 00:15:36.830 10:10:04 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:36.830 10:10:04 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:36.830 10:10:04 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:36.830 10:10:04 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:36.830 10:10:04 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:36.830 10:10:04 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:36.830 10:10:04 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:37.092 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:37.092 [ 00:15:37.092 { 00:15:37.092 "name": "ftl0", 00:15:37.092 "aliases": [ 00:15:37.092 "7454fe60-8bbc-41db-8a78-4560fd27338c" 00:15:37.092 ], 00:15:37.092 "product_name": "FTL disk", 00:15:37.092 
"block_size": 4096, 00:15:37.092 "num_blocks": 20971520, 00:15:37.092 "uuid": "7454fe60-8bbc-41db-8a78-4560fd27338c", 00:15:37.092 "assigned_rate_limits": { 00:15:37.092 "rw_ios_per_sec": 0, 00:15:37.092 "rw_mbytes_per_sec": 0, 00:15:37.092 "r_mbytes_per_sec": 0, 00:15:37.092 "w_mbytes_per_sec": 0 00:15:37.092 }, 00:15:37.092 "claimed": false, 00:15:37.092 "zoned": false, 00:15:37.092 "supported_io_types": { 00:15:37.092 "read": true, 00:15:37.092 "write": true, 00:15:37.092 "unmap": true, 00:15:37.092 "flush": true, 00:15:37.092 "reset": false, 00:15:37.092 "nvme_admin": false, 00:15:37.092 "nvme_io": false, 00:15:37.092 "nvme_io_md": false, 00:15:37.092 "write_zeroes": true, 00:15:37.092 "zcopy": false, 00:15:37.092 "get_zone_info": false, 00:15:37.092 "zone_management": false, 00:15:37.092 "zone_append": false, 00:15:37.092 "compare": false, 00:15:37.092 "compare_and_write": false, 00:15:37.092 "abort": false, 00:15:37.092 "seek_hole": false, 00:15:37.092 "seek_data": false, 00:15:37.092 "copy": false, 00:15:37.092 "nvme_iov_md": false 00:15:37.092 }, 00:15:37.092 "driver_specific": { 00:15:37.092 "ftl": { 00:15:37.092 "base_bdev": "51bf08d3-566a-429b-8ad8-f486b5f57191", 00:15:37.092 "cache": "nvc0n1p0" 00:15:37.092 } 00:15:37.092 } 00:15:37.092 } 00:15:37.092 ] 00:15:37.092 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:37.092 10:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:37.092 10:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:37.352 10:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:37.352 10:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:37.614 [2024-11-03 10:10:05.780791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.780831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:37.614 [2024-11-03 10:10:05.780844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:37.614 [2024-11-03 10:10:05.780852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.614 [2024-11-03 10:10:05.780890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:37.614 [2024-11-03 10:10:05.781474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.781514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:37.614 [2024-11-03 10:10:05.781524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:15:37.614 [2024-11-03 10:10:05.781533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.614 [2024-11-03 10:10:05.781969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.781987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:37.614 [2024-11-03 10:10:05.781996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:15:37.614 [2024-11-03 10:10:05.782007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.614 [2024-11-03 10:10:05.785260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.785281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:37.614 [2024-11-03 
10:10:05.785291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:15:37.614 [2024-11-03 10:10:05.785311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.614 [2024-11-03 10:10:05.791434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.791553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:37.614 [2024-11-03 10:10:05.791568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.096 ms 00:15:37.614 [2024-11-03 10:10:05.791577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.614 [2024-11-03 10:10:05.793861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.793905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:37.614 [2024-11-03 10:10:05.793914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:15:37.614 [2024-11-03 10:10:05.793923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.614 [2024-11-03 10:10:05.798947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.799060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:37.614 [2024-11-03 10:10:05.799075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.983 ms 00:15:37.614 [2024-11-03 10:10:05.799084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.614 [2024-11-03 10:10:05.799271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.614 [2024-11-03 10:10:05.799284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:37.614 [2024-11-03 10:10:05.799292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:15:37.614 [2024-11-03 10:10:05.799301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.615 [2024-11-03 10:10:05.801626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.615 [2024-11-03 10:10:05.801660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:37.615 [2024-11-03 10:10:05.801669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:15:37.615 [2024-11-03 10:10:05.801677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.615 [2024-11-03 10:10:05.803112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.615 [2024-11-03 10:10:05.803148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:37.615 [2024-11-03 10:10:05.803156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:15:37.615 [2024-11-03 10:10:05.803164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.615 [2024-11-03 10:10:05.804753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.615 [2024-11-03 10:10:05.804786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:37.615 [2024-11-03 10:10:05.804794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:15:37.615 [2024-11-03 10:10:05.804803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.615 [2024-11-03 10:10:05.806207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.615 [2024-11-03 10:10:05.806320] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:37.615 [2024-11-03 10:10:05.806333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:15:37.615 [2024-11-03 10:10:05.806342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.615 [2024-11-03 10:10:05.806378] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:37.615 [2024-11-03 10:10:05.806405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 10:10:05.806584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:37.615 [2024-11-03 
10:10:05.806593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
[Bands 24-96 elided: every band reports the identical "0 / 261120 wr_cnt: 0 state: free"]
00:15:37.616 [2024-11-03 10:10:05.807242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:37.616 [2024-11-03 10:10:05.807250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:37.616 [2024-11-03 10:10:05.807259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:37.616 [2024-11-03 10:10:05.807266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:37.616 [2024-11-03 10:10:05.807286] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:37.616 [2024-11-03 10:10:05.807294] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7454fe60-8bbc-41db-8a78-4560fd27338c 00:15:37.616 [2024-11-03 10:10:05.807304] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:37.616 [2024-11-03 10:10:05.807312] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:37.616 [2024-11-03 10:10:05.807322] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:37.616 [2024-11-03 10:10:05.807330] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:37.616 [2024-11-03 10:10:05.807338] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:37.616 [2024-11-03 10:10:05.807346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:37.616 [2024-11-03 10:10:05.807355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:37.616 [2024-11-03 10:10:05.807362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:37.616 [2024-11-03 10:10:05.807370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:37.616 [2024-11-03 10:10:05.807377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.616 [2024-11-03 10:10:05.807386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:37.616 [2024-11-03 10:10:05.807394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:15:37.616 [2024-11-03 10:10:05.807402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.809270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.616 [2024-11-03 10:10:05.809292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:37.616 [2024-11-03 10:10:05.809301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.836 ms 00:15:37.616 [2024-11-03 10:10:05.809324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.809447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:37.616 [2024-11-03 10:10:05.809458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:37.616 [2024-11-03 10:10:05.809466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:15:37.616 [2024-11-03 10:10:05.809475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.815922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.815957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:37.616 [2024-11-03 10:10:05.815968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.815977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 
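Each management step in the trace above is logged by mngt/ftl_mngt.c as a quadruple (Action or Rollback, name, duration, status). When skimming a long shutdown trace like this one, the quadruples collapse naturally to one line per step; a minimal awk sketch, assuming the trace has been saved to ftl.log (a hypothetical filename, not produced by the test itself):

    # one line per management step: "<name>  <duration> ms"
    awk '
      /trace_step/ && / name: / {           # remember the step name...
        n = ""
        for (i = 1; i <= NF; i++) if ($i == "name:")
          for (j = i + 1; j <= NF && $j !~ /:/; j++) n = n $j " "
      }
      /trace_step/ && / duration: / {       # ...and print it with the duration that follows
        for (i = 1; i <= NF; i++) if ($i == "duration:")
          printf "%-35s%s ms\n", n, $(i + 1)
      }
    ' ftl.log

The name scan stops at the first token containing a colon, which drops the elapsed-time stamp appended to every line in this log format.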
[2024-11-03 10:10:05.816038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.816048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:37.616 [2024-11-03 10:10:05.816056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.816066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.816165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.816182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:37.616 [2024-11-03 10:10:05.816200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.816209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.816249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.816261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:37.616 [2024-11-03 10:10:05.816269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.816279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.828261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.828301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:37.616 [2024-11-03 10:10:05.828310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.828330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.837993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.838138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:37.616 [2024-11-03 10:10:05.838164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.838174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.838260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.838276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:37.616 [2024-11-03 10:10:05.838287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.838296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.838367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.838377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:37.616 [2024-11-03 10:10:05.838386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.838395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.838480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.838491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:37.616 [2024-11-03 10:10:05.838499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.838511] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.838566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.838578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:37.616 [2024-11-03 10:10:05.838595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.838605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.838657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.838669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:37.616 [2024-11-03 10:10:05.838677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.838688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.838756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:37.616 [2024-11-03 10:10:05.838768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:37.616 [2024-11-03 10:10:05.838777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:37.616 [2024-11-03 10:10:05.838787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:37.616 [2024-11-03 10:10:05.838959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.131 ms, result 0 00:15:37.616 true 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83974 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 83974 ']' 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 83974 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83974 00:15:37.616 killing process with pid 83974 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83974' 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 83974 00:15:37.616 10:10:05 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 83974 00:15:42.936 10:10:10 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:42.936 10:10:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:42.936 10:10:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.937 10:10:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:42.937 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:42.937 fio-3.35 00:15:42.937 Starting 1 thread 00:15:49.534 00:15:49.534 test: (groupid=0, jobs=1): err= 0: pid=84139: Sun Nov 3 10:10:16 2024 00:15:49.534 read: IOPS=727, BW=48.3MiB/s (50.7MB/s)(255MiB/5266msec) 00:15:49.534 slat (nsec): min=4006, max=51248, avg=6996.19, stdev=3683.92 00:15:49.534 clat (usec): min=271, max=3796, avg=618.62, stdev=218.78 00:15:49.534 lat (usec): min=284, max=3800, avg=625.62, stdev=220.55 00:15:49.534 clat percentiles (usec): 00:15:49.534 | 1.00th=[ 302], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 453], 00:15:49.534 | 30.00th=[ 474], 40.00th=[ 537], 50.00th=[ 594], 60.00th=[ 611], 00:15:49.534 | 70.00th=[ 685], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 963], 00:15:49.534 | 99.00th=[ 1074], 99.50th=[ 1172], 99.90th=[ 2442], 99.95th=[ 3163], 00:15:49.534 | 99.99th=[ 3785] 00:15:49.534 write: IOPS=732, BW=48.7MiB/s (51.0MB/s)(256MiB/5261msec); 0 zone resets 00:15:49.534 slat (nsec): min=14675, max=89833, avg=21932.56, stdev=6471.55 00:15:49.534 clat (usec): min=280, max=1962, avg=707.67, stdev=246.56 00:15:49.534 lat (usec): min=298, max=2002, avg=729.60, stdev=250.28 00:15:49.534 clat percentiles (usec): 00:15:49.534 | 1.00th=[ 322], 5.00th=[ 355], 10.00th=[ 392], 20.00th=[ 553], 00:15:49.534 | 30.00th=[ 562], 40.00th=[ 627], 50.00th=[ 644], 60.00th=[ 701], 00:15:49.534 | 70.00th=[ 775], 80.00th=[ 963], 90.00th=[ 996], 95.00th=[ 1057], 00:15:49.534 | 99.00th=[ 1663], 99.50th=[ 1696], 99.90th=[ 1811], 99.95th=[ 1876], 00:15:49.534 | 99.99th=[ 1958] 00:15:49.534 bw ( KiB/s): min=33048, max=64872, per=99.22%, avg=49449.60, stdev=12084.41, samples=10 00:15:49.534 iops : min= 486, max= 954, avg=727.20, stdev=177.71, samples=10 00:15:49.534 lat (usec) : 500=24.32%, 750=45.87%, 
1000=24.01% 00:15:49.534 lat (msec) : 2=5.74%, 4=0.07% 00:15:49.534 cpu : usr=99.01%, sys=0.13%, ctx=11, majf=0, minf=1326 00:15:49.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:49.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.534 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:49.534 00:15:49.534 Run status group 0 (all jobs): 00:15:49.534 READ: bw=48.3MiB/s (50.7MB/s), 48.3MiB/s-48.3MiB/s (50.7MB/s-50.7MB/s), io=255MiB (267MB), run=5266-5266msec 00:15:49.534 WRITE: bw=48.7MiB/s (51.0MB/s), 48.7MiB/s-48.7MiB/s (51.0MB/s-51.0MB/s), io=256MiB (269MB), run=5261-5261msec 00:15:49.534 ----------------------------------------------------- 00:15:49.534 Suppressions used: 00:15:49.534 count bytes template 00:15:49.534 1 5 /usr/src/fio/parse.c 00:15:49.534 1 8 libtcmalloc_minimal.so 00:15:49.534 1 904 libcrypto.so 00:15:49.534 ----------------------------------------------------- 00:15:49.534 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:49.534 10:10:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:49.534 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:49.534 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:49.534 fio-3.35 00:15:49.534 Starting 2 threads 00:16:16.102 00:16:16.102 first_half: (groupid=0, jobs=1): err= 0: pid=84245: Sun Nov 3 10:10:42 2024 00:16:16.102 read: IOPS=2739, BW=10.7MiB/s (11.2MB/s)(255MiB/23811msec) 00:16:16.102 slat (nsec): min=3034, max=29920, avg=4535.35, stdev=1273.43 00:16:16.102 clat (usec): min=683, max=395910, avg=33592.60, stdev=21573.86 00:16:16.102 lat (usec): min=690, max=395915, avg=33597.14, stdev=21573.80 00:16:16.102 clat percentiles (msec): 00:16:16.102 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 27], 20.00th=[ 29], 00:16:16.102 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:16.102 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 41], 00:16:16.102 | 99.00th=[ 153], 99.50th=[ 184], 99.90th=[ 245], 99.95th=[ 342], 00:16:16.102 | 99.99th=[ 388] 00:16:16.102 write: IOPS=3208, BW=12.5MiB/s (13.1MB/s)(256MiB/20428msec); 0 zone resets 00:16:16.102 slat (usec): min=3, max=1237, avg= 6.29, stdev= 6.25 00:16:16.102 clat (usec): min=353, max=118456, avg=13034.28, stdev=22033.74 00:16:16.102 lat (usec): min=360, max=118463, avg=13040.57, stdev=22034.09 00:16:16.102 clat percentiles (usec): 00:16:16.102 | 1.00th=[ 685], 5.00th=[ 914], 10.00th=[ 1123], 20.00th=[ 1795], 00:16:16.102 | 30.00th=[ 3261], 40.00th=[ 4686], 50.00th=[ 5604], 60.00th=[ 6456], 00:16:16.102 | 70.00th=[ 8291], 80.00th=[ 12387], 90.00th=[ 31065], 95.00th=[ 73925], 00:16:16.102 | 99.00th=[100140], 99.50th=[105382], 99.90th=[115868], 99.95th=[116917], 00:16:16.102 | 99.99th=[117965] 00:16:16.102 bw ( KiB/s): min= 976, max=40536, per=78.56%, avg=20162.81, stdev=10167.78, samples=26 00:16:16.102 iops : min= 244, max=10134, avg=5040.69, stdev=2541.94, samples=26 00:16:16.102 lat (usec) : 500=0.02%, 750=0.98%, 1000=2.50% 00:16:16.102 lat (msec) : 2=7.50%, 4=6.40%, 10=22.00%, 20=6.90%, 50=47.38% 00:16:16.102 lat (msec) : 100=4.74%, 250=1.53%, 500=0.05% 00:16:16.102 cpu : usr=99.25%, sys=0.13%, ctx=60, majf=0, minf=5537 00:16:16.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:16.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.102 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.102 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.102 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.102 second_half: (groupid=0, jobs=1): err= 0: pid=84246: Sun Nov 3 10:10:42 2024 00:16:16.102 read: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(254MiB/23612msec) 00:16:16.102 slat (nsec): min=3004, max=39394, avg=3855.58, stdev=871.01 00:16:16.102 clat (usec): min=653, max=403418, avg=34538.85, stdev=19813.81 00:16:16.102 lat (usec): min=659, max=403425, avg=34542.71, stdev=19813.80 00:16:16.102 clat percentiles (msec): 00:16:16.102 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:16:16.102 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:16.102 | 70.00th=[ 33], 80.00th=[ 35], 
90.00th=[ 37], 95.00th=[ 44], 00:16:16.102 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 215], 99.95th=[ 257], 00:16:16.102 | 99.99th=[ 397] 00:16:16.102 write: IOPS=4240, BW=16.6MiB/s (17.4MB/s)(256MiB/15456msec); 0 zone resets 00:16:16.102 slat (usec): min=3, max=2309, avg= 6.25, stdev=11.09 00:16:16.102 clat (usec): min=416, max=117648, avg=11767.96, stdev=21539.72 00:16:16.102 lat (usec): min=420, max=117654, avg=11774.21, stdev=21539.91 00:16:16.102 clat percentiles (usec): 00:16:16.102 | 1.00th=[ 725], 5.00th=[ 947], 10.00th=[ 1106], 20.00th=[ 1467], 00:16:16.102 | 30.00th=[ 1991], 40.00th=[ 3326], 50.00th=[ 4686], 60.00th=[ 5735], 00:16:16.102 | 70.00th=[ 7177], 80.00th=[ 11994], 90.00th=[ 20841], 95.00th=[ 72877], 00:16:16.102 | 99.00th=[101188], 99.50th=[104334], 99.90th=[112722], 99.95th=[114820], 00:16:16.102 | 99.99th=[116917] 00:16:16.102 bw ( KiB/s): min= 5464, max=43080, per=100.00%, avg=26217.10, stdev=10380.64, samples=20 00:16:16.102 iops : min= 1368, max=10770, avg=6554.25, stdev=2594.86, samples=20 00:16:16.102 lat (usec) : 500=0.01%, 750=0.66%, 1000=2.58% 00:16:16.102 lat (msec) : 2=12.05%, 4=7.40%, 10=15.49%, 20=7.87%, 50=47.38% 00:16:16.102 lat (msec) : 100=4.84%, 250=1.69%, 500=0.03% 00:16:16.102 cpu : usr=99.36%, sys=0.14%, ctx=43, majf=0, minf=5597 00:16:16.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:16.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.102 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.102 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.102 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.102 00:16:16.102 Run status group 0 (all jobs): 00:16:16.102 READ: bw=21.4MiB/s (22.4MB/s), 10.7MiB/s-10.8MiB/s (11.2MB/s-11.3MB/s), io=509MiB (534MB), run=23612-23811msec 00:16:16.102 WRITE: bw=25.1MiB/s (26.3MB/s), 12.5MiB/s-16.6MiB/s (13.1MB/s-17.4MB/s), io=512MiB (537MB), run=15456-20428msec 00:16:16.102 ----------------------------------------------------- 00:16:16.102 Suppressions used: 00:16:16.102 count bytes template 00:16:16.102 2 10 /usr/src/fio/parse.c 00:16:16.103 2 192 /usr/src/fio/iolog.c 00:16:16.103 1 8 libtcmalloc_minimal.so 00:16:16.103 1 904 libcrypto.so 00:16:16.103 ----------------------------------------------------- 00:16:16.103 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:16.103 10:10:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:16.103 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:16.103 fio-3.35 00:16:16.103 Starting 1 thread 00:16:34.205 00:16:34.205 test: (groupid=0, jobs=1): err= 0: pid=84550: Sun Nov 3 10:11:01 2024 00:16:34.205 read: IOPS=6394, BW=25.0MiB/s (26.2MB/s)(255MiB/10196msec) 00:16:34.205 slat (nsec): min=2960, max=22467, avg=4665.15, stdev=1156.58 00:16:34.205 clat (usec): min=877, max=45902, avg=20007.62, stdev=2419.10 00:16:34.205 lat (usec): min=886, max=45907, avg=20012.28, stdev=2419.07 00:16:34.205 clat percentiles (usec): 00:16:34.205 | 1.00th=[15270], 5.00th=[16712], 10.00th=[17433], 20.00th=[18482], 00:16:34.205 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19792], 60.00th=[20055], 00:16:34.205 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22938], 95.00th=[24511], 00:16:34.205 | 99.00th=[27657], 99.50th=[28967], 99.90th=[33817], 99.95th=[38011], 00:16:34.205 | 99.99th=[44303] 00:16:34.205 write: IOPS=9094, BW=35.5MiB/s (37.3MB/s)(256MiB/7206msec); 0 zone resets 00:16:34.205 slat (usec): min=4, max=583, avg= 7.01, stdev= 6.65 00:16:34.205 clat (usec): min=566, max=81204, avg=14004.27, stdev=16103.35 00:16:34.205 lat (usec): min=571, max=81209, avg=14011.28, stdev=16103.50 00:16:34.205 clat percentiles (usec): 00:16:34.205 | 1.00th=[ 1172], 5.00th=[ 1450], 10.00th=[ 1631], 20.00th=[ 1909], 00:16:34.205 | 30.00th=[ 2245], 40.00th=[ 3228], 50.00th=[ 8848], 60.00th=[11469], 00:16:34.205 | 70.00th=[15139], 80.00th=[18744], 90.00th=[46924], 95.00th=[50070], 00:16:34.205 | 99.00th=[56361], 99.50th=[58459], 99.90th=[67634], 99.95th=[68682], 00:16:34.205 | 99.99th=[77071] 00:16:34.205 bw ( KiB/s): min=12920, max=50960, per=96.08%, avg=34952.53, stdev=9112.39, samples=15 00:16:34.205 iops : min= 3230, max=12740, avg=8738.13, stdev=2278.10, samples=15 00:16:34.205 lat (usec) : 750=0.04%, 1000=0.16% 00:16:34.205 lat (msec) : 2=11.49%, 4=9.02%, 10=6.60%, 20=42.13%, 50=28.03% 00:16:34.205 lat (msec) : 100=2.53% 
00:16:34.205 cpu : usr=99.07%, sys=0.19%, ctx=48, majf=0, minf=5577 00:16:34.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:34.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.206 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.206 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.206 00:16:34.206 Run status group 0 (all jobs): 00:16:34.206 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=255MiB (267MB), run=10196-10196msec 00:16:34.206 WRITE: bw=35.5MiB/s (37.3MB/s), 35.5MiB/s-35.5MiB/s (37.3MB/s-37.3MB/s), io=256MiB (268MB), run=7206-7206msec 00:16:34.464 ----------------------------------------------------- 00:16:34.464 Suppressions used: 00:16:34.464 count bytes template 00:16:34.464 1 5 /usr/src/fio/parse.c 00:16:34.464 2 192 /usr/src/fio/iolog.c 00:16:34.464 1 8 libtcmalloc_minimal.so 00:16:34.464 1 904 libcrypto.so 00:16:34.464 ----------------------------------------------------- 00:16:34.464 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:34.464 Remove shared memory files 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69460 /dev/shm/spdk_tgt_trace.pid82921 00:16:34.464 10:11:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:34.724 10:11:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:34.724 ************************************ 00:16:34.724 END TEST ftl_fio_basic 00:16:34.724 ************************************ 00:16:34.724 00:16:34.724 real 1m4.235s 00:16:34.724 user 2m22.031s 00:16:34.724 sys 0m2.993s 00:16:34.724 10:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.724 10:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:34.724 10:11:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:34.724 10:11:02 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:34.724 10:11:02 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.724 10:11:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:34.724 ************************************ 00:16:34.724 START TEST ftl_bdevperf 00:16:34.724 ************************************ 00:16:34.724 10:11:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:34.724 * Looking for test storage... 
00:16:34.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:34.724 10:11:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:34.724 10:11:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:34.724 10:11:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.724 --rc genhtml_branch_coverage=1 00:16:34.724 --rc genhtml_function_coverage=1 00:16:34.724 --rc genhtml_legend=1 00:16:34.724 --rc geninfo_all_blocks=1 00:16:34.724 --rc geninfo_unexecuted_blocks=1 00:16:34.724 00:16:34.724 ' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.724 --rc genhtml_branch_coverage=1 00:16:34.724 
--rc genhtml_function_coverage=1 00:16:34.724 --rc genhtml_legend=1 00:16:34.724 --rc geninfo_all_blocks=1 00:16:34.724 --rc geninfo_unexecuted_blocks=1 00:16:34.724 00:16:34.724 ' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.724 --rc genhtml_branch_coverage=1 00:16:34.724 --rc genhtml_function_coverage=1 00:16:34.724 --rc genhtml_legend=1 00:16:34.724 --rc geninfo_all_blocks=1 00:16:34.724 --rc geninfo_unexecuted_blocks=1 00:16:34.724 00:16:34.724 ' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.724 --rc genhtml_branch_coverage=1 00:16:34.724 --rc genhtml_function_coverage=1 00:16:34.724 --rc genhtml_legend=1 00:16:34.724 --rc geninfo_all_blocks=1 00:16:34.724 --rc geninfo_unexecuted_blocks=1 00:16:34.724 00:16:34.724 ' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:34.724 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84821 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84821 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84821 ']' 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.725 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:34.984 [2024-11-03 10:11:03.114285] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
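The bring-up traced from here on amounts to launching bdevperf in wait-for-RPC mode and then assembling the ftl0 bdev over rpc.py. A condensed sketch of the same sequence, with device addresses, sizes, and UUIDs taken from this log; the flag descriptions are inferred from the surrounding trace, and this is a sketch of what the trace performs, not the test script itself:

    # start bdevperf idle; -z defers I/O until started over RPC, -T names the bdev under test
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!                       # pid 84821 in this run
    # once the RPC socket answers (waitforlisten in autotest_common.sh), build the FTL stack:
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (nvme0n1)
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u bdd6dbcf-a177-4ee0-aff1-a4ea6943180b
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
    $RPC bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice
    $RPC -t 240 bdev_ftl_create -b ftl0 -d 74b322ad-112f-4058-96c3-567ed6de861c \
         -c nvc0n1p0 --l2p_dram_limit 20                                # 20 MiB L2P DRAM cap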
00:16:34.984 [2024-11-03 10:11:03.114402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84821 ] 00:16:34.984 [2024-11-03 10:11:03.246794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.984 [2024-11-03 10:11:03.289447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:35.920 10:11:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:35.920 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.178 { 00:16:36.178 "name": "nvme0n1", 00:16:36.178 "aliases": [ 00:16:36.178 "96e694e5-0e06-45e5-998b-2965ad1fbedd" 00:16:36.178 ], 00:16:36.178 "product_name": "NVMe disk", 00:16:36.178 "block_size": 4096, 00:16:36.178 "num_blocks": 1310720, 00:16:36.178 "uuid": "96e694e5-0e06-45e5-998b-2965ad1fbedd", 00:16:36.178 "numa_id": -1, 00:16:36.178 "assigned_rate_limits": { 00:16:36.178 "rw_ios_per_sec": 0, 00:16:36.178 "rw_mbytes_per_sec": 0, 00:16:36.178 "r_mbytes_per_sec": 0, 00:16:36.178 "w_mbytes_per_sec": 0 00:16:36.178 }, 00:16:36.178 "claimed": true, 00:16:36.178 "claim_type": "read_many_write_one", 00:16:36.178 "zoned": false, 00:16:36.178 "supported_io_types": { 00:16:36.178 "read": true, 00:16:36.178 "write": true, 00:16:36.178 "unmap": true, 00:16:36.178 "flush": true, 00:16:36.178 "reset": true, 00:16:36.178 "nvme_admin": true, 00:16:36.178 "nvme_io": true, 00:16:36.178 "nvme_io_md": false, 00:16:36.178 "write_zeroes": true, 00:16:36.178 "zcopy": false, 00:16:36.178 "get_zone_info": false, 00:16:36.178 "zone_management": false, 00:16:36.178 "zone_append": false, 00:16:36.178 "compare": true, 00:16:36.178 "compare_and_write": false, 00:16:36.178 "abort": true, 00:16:36.178 "seek_hole": false, 00:16:36.178 "seek_data": false, 00:16:36.178 "copy": true, 00:16:36.178 "nvme_iov_md": false 00:16:36.178 }, 00:16:36.178 "driver_specific": { 00:16:36.178 
"nvme": [ 00:16:36.178 { 00:16:36.178 "pci_address": "0000:00:11.0", 00:16:36.178 "trid": { 00:16:36.178 "trtype": "PCIe", 00:16:36.178 "traddr": "0000:00:11.0" 00:16:36.178 }, 00:16:36.178 "ctrlr_data": { 00:16:36.178 "cntlid": 0, 00:16:36.178 "vendor_id": "0x1b36", 00:16:36.178 "model_number": "QEMU NVMe Ctrl", 00:16:36.178 "serial_number": "12341", 00:16:36.178 "firmware_revision": "8.0.0", 00:16:36.178 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:36.178 "oacs": { 00:16:36.178 "security": 0, 00:16:36.178 "format": 1, 00:16:36.178 "firmware": 0, 00:16:36.178 "ns_manage": 1 00:16:36.178 }, 00:16:36.178 "multi_ctrlr": false, 00:16:36.178 "ana_reporting": false 00:16:36.178 }, 00:16:36.178 "vs": { 00:16:36.178 "nvme_version": "1.4" 00:16:36.178 }, 00:16:36.178 "ns_data": { 00:16:36.178 "id": 1, 00:16:36.178 "can_share": false 00:16:36.178 } 00:16:36.178 } 00:16:36.178 ], 00:16:36.178 "mp_policy": "active_passive" 00:16:36.178 } 00:16:36.178 } 00:16:36.178 ]' 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:36.178 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:36.437 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:36.437 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:36.437 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:36.437 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:36.437 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=8e611ff1-0f71-4697-81cc-477c73f4ca79 00:16:36.437 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:36.437 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e611ff1-0f71-4697-81cc-477c73f4ca79 00:16:36.695 10:11:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:36.954 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=bdd6dbcf-a177-4ee0-aff1-a4ea6943180b 00:16:36.954 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bdd6dbcf-a177-4ee0-aff1-a4ea6943180b 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.215 10:11:05 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:37.215 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:37.518 { 00:16:37.518 "name": "74b322ad-112f-4058-96c3-567ed6de861c", 00:16:37.518 "aliases": [ 00:16:37.518 "lvs/nvme0n1p0" 00:16:37.518 ], 00:16:37.518 "product_name": "Logical Volume", 00:16:37.518 "block_size": 4096, 00:16:37.518 "num_blocks": 26476544, 00:16:37.518 "uuid": "74b322ad-112f-4058-96c3-567ed6de861c", 00:16:37.518 "assigned_rate_limits": { 00:16:37.518 "rw_ios_per_sec": 0, 00:16:37.518 "rw_mbytes_per_sec": 0, 00:16:37.518 "r_mbytes_per_sec": 0, 00:16:37.518 "w_mbytes_per_sec": 0 00:16:37.518 }, 00:16:37.518 "claimed": false, 00:16:37.518 "zoned": false, 00:16:37.518 "supported_io_types": { 00:16:37.518 "read": true, 00:16:37.518 "write": true, 00:16:37.518 "unmap": true, 00:16:37.518 "flush": false, 00:16:37.518 "reset": true, 00:16:37.518 "nvme_admin": false, 00:16:37.518 "nvme_io": false, 00:16:37.518 "nvme_io_md": false, 00:16:37.518 "write_zeroes": true, 00:16:37.518 "zcopy": false, 00:16:37.518 "get_zone_info": false, 00:16:37.518 "zone_management": false, 00:16:37.518 "zone_append": false, 00:16:37.518 "compare": false, 00:16:37.518 "compare_and_write": false, 00:16:37.518 "abort": false, 00:16:37.518 "seek_hole": true, 00:16:37.518 "seek_data": true, 00:16:37.518 "copy": false, 00:16:37.518 "nvme_iov_md": false 00:16:37.518 }, 00:16:37.518 "driver_specific": { 00:16:37.518 "lvol": { 00:16:37.518 "lvol_store_uuid": "bdd6dbcf-a177-4ee0-aff1-a4ea6943180b", 00:16:37.518 "base_bdev": "nvme0n1", 00:16:37.518 "thin_provision": true, 00:16:37.518 "num_allocated_clusters": 0, 00:16:37.518 "snapshot": false, 00:16:37.518 "clone": false, 00:16:37.518 "esnap_clone": false 00:16:37.518 } 00:16:37.518 } 00:16:37.518 } 00:16:37.518 ]' 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:37.518 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:37.790 10:11:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74b322ad-112f-4058-96c3-567ed6de861c 00:16:37.790 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:37.790 { 00:16:37.790 "name": "74b322ad-112f-4058-96c3-567ed6de861c", 00:16:37.790 "aliases": [ 00:16:37.790 "lvs/nvme0n1p0" 00:16:37.790 ], 00:16:37.790 "product_name": "Logical Volume", 00:16:37.790 "block_size": 4096, 00:16:37.790 "num_blocks": 26476544, 00:16:37.790 "uuid": "74b322ad-112f-4058-96c3-567ed6de861c", 00:16:37.790 "assigned_rate_limits": { 00:16:37.790 "rw_ios_per_sec": 0, 00:16:37.790 "rw_mbytes_per_sec": 0, 00:16:37.790 "r_mbytes_per_sec": 0, 00:16:37.790 "w_mbytes_per_sec": 0 00:16:37.790 }, 00:16:37.790 "claimed": false, 00:16:37.790 "zoned": false, 00:16:37.790 "supported_io_types": { 00:16:37.790 "read": true, 00:16:37.790 "write": true, 00:16:37.790 "unmap": true, 00:16:37.790 "flush": false, 00:16:37.790 "reset": true, 00:16:37.790 "nvme_admin": false, 00:16:37.790 "nvme_io": false, 00:16:37.790 "nvme_io_md": false, 00:16:37.790 "write_zeroes": true, 00:16:37.790 "zcopy": false, 00:16:37.790 "get_zone_info": false, 00:16:37.790 "zone_management": false, 00:16:37.790 "zone_append": false, 00:16:37.790 "compare": false, 00:16:37.790 "compare_and_write": false, 00:16:37.790 "abort": false, 00:16:37.790 "seek_hole": true, 00:16:37.790 "seek_data": true, 00:16:37.790 "copy": false, 00:16:37.790 "nvme_iov_md": false 00:16:37.790 }, 00:16:37.790 "driver_specific": { 00:16:37.790 "lvol": { 00:16:37.790 "lvol_store_uuid": "bdd6dbcf-a177-4ee0-aff1-a4ea6943180b", 00:16:37.790 "base_bdev": "nvme0n1", 00:16:37.790 "thin_provision": true, 00:16:37.790 "num_allocated_clusters": 0, 00:16:37.790 "snapshot": false, 00:16:37.790 "clone": false, 00:16:37.790 "esnap_clone": false 00:16:37.790 } 00:16:37.790 } 00:16:37.790 } 00:16:37.790 ]' 00:16:37.790 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:37.790 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:37.790 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 74b322ad-112f-4058-96c3-567ed6de861c 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=74b322ad-112f-4058-96c3-567ed6de861c 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:38.049 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74b322ad-112f-4058-96c3-567ed6de861c 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:38.307 { 00:16:38.307 "name": "74b322ad-112f-4058-96c3-567ed6de861c", 00:16:38.307 "aliases": [ 00:16:38.307 "lvs/nvme0n1p0" 00:16:38.307 ], 00:16:38.307 "product_name": "Logical Volume", 00:16:38.307 "block_size": 4096, 00:16:38.307 "num_blocks": 26476544, 00:16:38.307 "uuid": "74b322ad-112f-4058-96c3-567ed6de861c", 00:16:38.307 "assigned_rate_limits": { 00:16:38.307 "rw_ios_per_sec": 0, 00:16:38.307 "rw_mbytes_per_sec": 0, 00:16:38.307 "r_mbytes_per_sec": 0, 00:16:38.307 "w_mbytes_per_sec": 0 00:16:38.307 }, 00:16:38.307 "claimed": false, 00:16:38.307 "zoned": false, 00:16:38.307 "supported_io_types": { 00:16:38.307 "read": true, 00:16:38.307 "write": true, 00:16:38.307 "unmap": true, 00:16:38.307 "flush": false, 00:16:38.307 "reset": true, 00:16:38.307 "nvme_admin": false, 00:16:38.307 "nvme_io": false, 00:16:38.307 "nvme_io_md": false, 00:16:38.307 "write_zeroes": true, 00:16:38.307 "zcopy": false, 00:16:38.307 "get_zone_info": false, 00:16:38.307 "zone_management": false, 00:16:38.307 "zone_append": false, 00:16:38.307 "compare": false, 00:16:38.307 "compare_and_write": false, 00:16:38.307 "abort": false, 00:16:38.307 "seek_hole": true, 00:16:38.307 "seek_data": true, 00:16:38.307 "copy": false, 00:16:38.307 "nvme_iov_md": false 00:16:38.307 }, 00:16:38.307 "driver_specific": { 00:16:38.307 "lvol": { 00:16:38.307 "lvol_store_uuid": "bdd6dbcf-a177-4ee0-aff1-a4ea6943180b", 00:16:38.307 "base_bdev": "nvme0n1", 00:16:38.307 "thin_provision": true, 00:16:38.307 "num_allocated_clusters": 0, 00:16:38.307 "snapshot": false, 00:16:38.307 "clone": false, 00:16:38.307 "esnap_clone": false 00:16:38.307 } 00:16:38.307 } 00:16:38.307 } 00:16:38.307 ]' 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:38.307 10:11:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 74b322ad-112f-4058-96c3-567ed6de861c -c nvc0n1p0 --l2p_dram_limit 20 00:16:38.567 [2024-11-03 10:11:06.825419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.825464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:38.567 [2024-11-03 10:11:06.825477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:38.567 [2024-11-03 10:11:06.825486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.825527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.825534] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:38.567 [2024-11-03 10:11:06.825544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:38.567 [2024-11-03 10:11:06.825550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.825567] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:38.567 [2024-11-03 10:11:06.825744] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:38.567 [2024-11-03 10:11:06.825758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.825766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:38.567 [2024-11-03 10:11:06.825774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:16:38.567 [2024-11-03 10:11:06.825781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.825806] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1288131a-1fe4-4816-8159-d8067d518b71 00:16:38.567 [2024-11-03 10:11:06.827055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.827079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:38.567 [2024-11-03 10:11:06.827087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:38.567 [2024-11-03 10:11:06.827096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.833943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.833972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:38.567 [2024-11-03 10:11:06.833982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:16:38.567 [2024-11-03 10:11:06.833992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.834083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.834093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:38.567 [2024-11-03 10:11:06.834100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:38.567 [2024-11-03 10:11:06.834110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.834140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.834152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:38.567 [2024-11-03 10:11:06.834159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:38.567 [2024-11-03 10:11:06.834166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.834186] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:38.567 [2024-11-03 10:11:06.835831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.835855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:38.567 [2024-11-03 10:11:06.835864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:16:38.567 [2024-11-03 10:11:06.835871] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.835899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.835906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:38.567 [2024-11-03 10:11:06.835915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:38.567 [2024-11-03 10:11:06.835922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.835935] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:38.567 [2024-11-03 10:11:06.836050] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:38.567 [2024-11-03 10:11:06.836063] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:38.567 [2024-11-03 10:11:06.836087] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:38.567 [2024-11-03 10:11:06.836100] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:38.567 [2024-11-03 10:11:06.836108] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:38.567 [2024-11-03 10:11:06.836115] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:38.567 [2024-11-03 10:11:06.836121] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:38.567 [2024-11-03 10:11:06.836129] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:38.567 [2024-11-03 10:11:06.836138] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:38.567 [2024-11-03 10:11:06.836149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.836155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:38.567 [2024-11-03 10:11:06.836166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:16:38.567 [2024-11-03 10:11:06.836171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.836247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.567 [2024-11-03 10:11:06.836254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:38.567 [2024-11-03 10:11:06.836262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:38.567 [2024-11-03 10:11:06.836268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.567 [2024-11-03 10:11:06.836344] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:38.567 [2024-11-03 10:11:06.836352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:38.567 [2024-11-03 10:11:06.836361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:38.567 [2024-11-03 10:11:06.836369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.567 [2024-11-03 10:11:06.836380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:38.567 [2024-11-03 10:11:06.836385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:38.567 [2024-11-03 10:11:06.836392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:38.567 
[2024-11-03 10:11:06.836398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:38.567 [2024-11-03 10:11:06.836406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:38.567 [2024-11-03 10:11:06.836411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:38.567 [2024-11-03 10:11:06.836418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:38.567 [2024-11-03 10:11:06.836423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:38.567 [2024-11-03 10:11:06.836433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:38.567 [2024-11-03 10:11:06.836439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:38.568 [2024-11-03 10:11:06.836446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:38.568 [2024-11-03 10:11:06.836451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:38.568 [2024-11-03 10:11:06.836465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:38.568 [2024-11-03 10:11:06.836471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:38.568 [2024-11-03 10:11:06.836485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.568 [2024-11-03 10:11:06.836499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:38.568 [2024-11-03 10:11:06.836508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.568 [2024-11-03 10:11:06.836523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:38.568 [2024-11-03 10:11:06.836530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.568 [2024-11-03 10:11:06.836545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:38.568 [2024-11-03 10:11:06.836551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.568 [2024-11-03 10:11:06.836565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:38.568 [2024-11-03 10:11:06.836573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:38.568 [2024-11-03 10:11:06.836586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:38.568 [2024-11-03 10:11:06.836591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:38.568 [2024-11-03 10:11:06.836599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:38.568 [2024-11-03 10:11:06.836605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:38.568 [2024-11-03 10:11:06.836613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:38.568 [2024-11-03 10:11:06.836619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:38.568 [2024-11-03 10:11:06.836631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:38.568 [2024-11-03 10:11:06.836638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836644] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:38.568 [2024-11-03 10:11:06.836656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:38.568 [2024-11-03 10:11:06.836663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:38.568 [2024-11-03 10:11:06.836670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.568 [2024-11-03 10:11:06.836677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:38.568 [2024-11-03 10:11:06.836686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:38.568 [2024-11-03 10:11:06.836692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:38.568 [2024-11-03 10:11:06.836700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:38.568 [2024-11-03 10:11:06.836705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:38.568 [2024-11-03 10:11:06.836713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:38.568 [2024-11-03 10:11:06.836722] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:38.568 [2024-11-03 10:11:06.836732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:38.568 [2024-11-03 10:11:06.836742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:38.568 [2024-11-03 10:11:06.836750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:38.568 [2024-11-03 10:11:06.836757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:38.568 [2024-11-03 10:11:06.836765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:38.568 [2024-11-03 10:11:06.836772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:38.568 [2024-11-03 10:11:06.836782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:38.568 [2024-11-03 10:11:06.836788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:38.568 [2024-11-03 10:11:06.836800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:38.568 [2024-11-03 10:11:06.836807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:38.568 [2024-11-03 10:11:06.836815] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:38.568 [2024-11-03 10:11:06.836822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:38.568 [2024-11-03 10:11:06.836831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:38.568 [2024-11-03 10:11:06.836837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:38.568 [2024-11-03 10:11:06.836847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:38.568 [2024-11-03 10:11:06.836853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:38.568 [2024-11-03 10:11:06.836862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:38.568 [2024-11-03 10:11:06.836868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:38.568 [2024-11-03 10:11:06.836876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:38.568 [2024-11-03 10:11:06.836881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:38.568 [2024-11-03 10:11:06.836888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:38.568 [2024-11-03 10:11:06.836894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.568 [2024-11-03 10:11:06.836903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:38.568 [2024-11-03 10:11:06.836909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:16:38.568 [2024-11-03 10:11:06.836917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.568 [2024-11-03 10:11:06.836941] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
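The layout dump above is internally consistent: the 80.00 MiB l2p region is exactly the 20971520 L2P entries times the 4-byte address size, those entries in turn cover 80 GiB of user-visible capacity at the 4096-byte block size, and the 103424.00 MiB base capacity matches the num_blocks (26476544) and block_size (4096) reported earlier by bdev_get_bdevs. A quick shell check of that arithmetic (plain POSIX arithmetic, not part of the test scripts):

    # l2p region: entries * 4 B per entry, in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))             # -> 80
    # user-visible capacity: entries * 4 KiB blocks, in GiB
    echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))   # -> 80
    # base bdev size: num_blocks * block_size, in MiB
    echo $(( 26476544 * 4096 / 1024 / 1024 ))          # -> 103424

The --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that 80 MiB L2P may stay resident in DRAM, which is why the startup trace later reports a maximum resident size of 19 (of 20) MiB.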
00:16:38.568 [2024-11-03 10:11:06.836949] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:42.769 [2024-11-03 10:11:10.658788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.658856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:42.769 [2024-11-03 10:11:10.658870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3821.832 ms 00:16:42.769 [2024-11-03 10:11:10.658884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.769 [2024-11-03 10:11:10.676621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.676803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.769 [2024-11-03 10:11:10.676819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.650 ms 00:16:42.769 [2024-11-03 10:11:10.676830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.769 [2024-11-03 10:11:10.676923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.676934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:42.769 [2024-11-03 10:11:10.676941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:42.769 [2024-11-03 10:11:10.676949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.769 [2024-11-03 10:11:10.687333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.687481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.769 [2024-11-03 10:11:10.687499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.354 ms 00:16:42.769 [2024-11-03 10:11:10.687516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.769 [2024-11-03 10:11:10.687550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.687562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.769 [2024-11-03 10:11:10.687571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:42.769 [2024-11-03 10:11:10.687581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.769 [2024-11-03 10:11:10.688009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.688031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.769 [2024-11-03 10:11:10.688043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:16:42.769 [2024-11-03 10:11:10.688062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.769 [2024-11-03 10:11:10.688198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.688215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.769 [2024-11-03 10:11:10.688245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:16:42.769 [2024-11-03 10:11:10.688260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.769 [2024-11-03 10:11:10.693923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.769 [2024-11-03 10:11:10.693952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.770 [2024-11-03 
10:11:10.693960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.644 ms 00:16:42.770 [2024-11-03 10:11:10.693968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.701410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:42.770 [2024-11-03 10:11:10.706877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.706902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:42.770 [2024-11-03 10:11:10.706913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.860 ms 00:16:42.770 [2024-11-03 10:11:10.706922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.779512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.779545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:42.770 [2024-11-03 10:11:10.779558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.568 ms 00:16:42.770 [2024-11-03 10:11:10.779565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.779713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.779722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.770 [2024-11-03 10:11:10.779733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:42.770 [2024-11-03 10:11:10.779739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.783496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.783523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:42.770 [2024-11-03 10:11:10.783534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.732 ms 00:16:42.770 [2024-11-03 10:11:10.783540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.787009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.787035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:42.770 [2024-11-03 10:11:10.787045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.440 ms 00:16:42.770 [2024-11-03 10:11:10.787051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.787325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.787334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.770 [2024-11-03 10:11:10.787346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:16:42.770 [2024-11-03 10:11:10.787352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.819777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.819805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:42.770 [2024-11-03 10:11:10.819815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.400 ms 00:16:42.770 [2024-11-03 10:11:10.819822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.824770] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.824801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:42.770 [2024-11-03 10:11:10.824812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:16:42.770 [2024-11-03 10:11:10.824819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.828058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.828174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:42.770 [2024-11-03 10:11:10.828189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:16:42.770 [2024-11-03 10:11:10.828195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.832450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.832478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.770 [2024-11-03 10:11:10.832490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:16:42.770 [2024-11-03 10:11:10.832496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.832532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.832540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.770 [2024-11-03 10:11:10.832551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:42.770 [2024-11-03 10:11:10.832560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.832617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.770 [2024-11-03 10:11:10.832625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:42.770 [2024-11-03 10:11:10.832633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:42.770 [2024-11-03 10:11:10.832639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.770 [2024-11-03 10:11:10.833475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4007.650 ms, result 0 00:16:42.770 { 00:16:42.770 "name": "ftl0", 00:16:42.770 "uuid": "1288131a-1fe4-4816-8159-d8067d518b71" 00:16:42.770 } 00:16:42.770 10:11:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:42.770 10:11:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:42.770 10:11:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:42.770 10:11:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:43.028 [2024-11-03 10:11:11.138577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:43.028 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:43.028 Zero copy mechanism will not be used. 00:16:43.028 Running I/O for 4 seconds... 
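This first pass drives ftl0 at queue depth 1 with a 69632-byte I/O size (68 KiB = 65536 + 4096), which, as the log notes, exceeds bdevperf's 65536-byte zero-copy threshold, so buffers are copied. The MiB/s column in the results that follow is just IOPS times I/O size; a sanity check of that arithmetic in shell:

    # 765.75 IOPS at 69632 B per I/O, in MiB/s (matches the 50.85 below)
    awk 'BEGIN { printf "%.2f\n", 765.75 * 69632 / 1048576 }'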
00:16:44.898 751.00 IOPS, 49.87 MiB/s [2024-11-03T10:11:14.195Z] 765.50 IOPS, 50.83 MiB/s [2024-11-03T10:11:15.571Z] 759.33 IOPS, 50.42 MiB/s [2024-11-03T10:11:15.571Z] 765.75 IOPS, 50.85 MiB/s 00:16:47.209 Latency(us) 00:16:47.209 [2024-11-03T10:11:15.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.209 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:47.209 ftl0 : 4.00 765.75 50.85 0.00 0.00 1386.25 450.56 2508.01 00:16:47.209 [2024-11-03T10:11:15.571Z] =================================================================================================================== 00:16:47.209 [2024-11-03T10:11:15.571Z] Total : 765.75 50.85 0.00 0.00 1386.25 450.56 2508.01 00:16:47.209 { 00:16:47.209 "results": [ 00:16:47.209 { 00:16:47.209 "job": "ftl0", 00:16:47.209 "core_mask": "0x1", 00:16:47.209 "workload": "randwrite", 00:16:47.209 "status": "finished", 00:16:47.209 "queue_depth": 1, 00:16:47.209 "io_size": 69632, 00:16:47.209 "runtime": 4.001295, 00:16:47.209 "iops": 765.7520877615872, 00:16:47.209 "mibps": 50.8507245779179, 00:16:47.209 "io_failed": 0, 00:16:47.209 "io_timeout": 0, 00:16:47.209 "avg_latency_us": 1386.2541393854187, 00:16:47.209 "min_latency_us": 450.56, 00:16:47.209 "max_latency_us": 2508.012307692308 00:16:47.209 } 00:16:47.209 ], 00:16:47.209 "core_count": 1 00:16:47.209 } 00:16:47.209 [2024-11-03 10:11:15.144507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:47.209 10:11:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:47.209 [2024-11-03 10:11:15.249086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:47.209 Running I/O for 4 seconds... 
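The second pass switches to queue depth 128 with 4 KiB random writes. Two quick consistency checks on the JSON summary that follows: bandwidth is again IOPS times I/O size, and with the queue kept full the mean latency should sit near queue_depth / IOPS (Little's law), which it does to within ramp-up noise:

    # 5148.68 IOPS at 4096 B per I/O, in MiB/s (matches the 20.11 below)
    awk 'BEGIN { printf "%.2f\n", 5148.68 * 4096 / 1048576 }'
    # Little's law estimate of mean latency in us (reported: ~24754.67)
    awk 'BEGIN { printf "%.0f\n", 128 / 5148.68 * 1e6 }'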
00:16:49.089 6833.00 IOPS, 26.69 MiB/s [2024-11-03T10:11:18.390Z] 5658.00 IOPS, 22.10 MiB/s [2024-11-03T10:11:19.333Z] 5314.00 IOPS, 20.76 MiB/s [2024-11-03T10:11:19.333Z] 5162.50 IOPS, 20.17 MiB/s 00:16:50.971 Latency(us) 00:16:50.971 [2024-11-03T10:11:19.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.971 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.971 ftl0 : 4.04 5148.68 20.11 0.00 0.00 24754.67 415.90 47185.92 00:16:50.971 [2024-11-03T10:11:19.333Z] =================================================================================================================== 00:16:50.971 [2024-11-03T10:11:19.333Z] Total : 5148.68 20.11 0.00 0.00 24754.67 0.00 47185.92 00:16:50.971 [2024-11-03 10:11:19.290842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:50.971 { 00:16:50.971 "results": [ 00:16:50.971 { 00:16:50.971 "job": "ftl0", 00:16:50.971 "core_mask": "0x1", 00:16:50.971 "workload": "randwrite", 00:16:50.971 "status": "finished", 00:16:50.971 "queue_depth": 128, 00:16:50.971 "io_size": 4096, 00:16:50.971 "runtime": 4.035597, 00:16:50.971 "iops": 5148.680604133663, 00:16:50.971 "mibps": 20.11203360989712, 00:16:50.971 "io_failed": 0, 00:16:50.971 "io_timeout": 0, 00:16:50.971 "avg_latency_us": 24754.6661904233, 00:16:50.971 "min_latency_us": 415.90153846153845, 00:16:50.971 "max_latency_us": 47185.92 00:16:50.971 } 00:16:50.971 ], 00:16:50.971 "core_count": 1 00:16:50.971 } 00:16:50.971 10:11:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:51.233 [2024-11-03 10:11:19.398211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:51.233 Running I/O for 4 seconds... 
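The final pass is a verify workload, and the verification LBA range in the summary that follows spans the whole user-visible device: start 0x0, length 0x1400000 blocks, i.e. 20971520 blocks of 4 KiB = 80 GiB, the same figure implied by the L2P entry count at startup. Checking the hex in shell:

    echo $(( 0x1400000 ))                              # -> 20971520 blocks
    echo $(( 0x1400000 * 4096 / 1024 / 1024 / 1024 ))  # -> 80 GiB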
00:16:53.115 4205.00 IOPS, 16.43 MiB/s [2024-11-03T10:11:22.419Z] 4281.50 IOPS, 16.72 MiB/s [2024-11-03T10:11:23.803Z] 4318.33 IOPS, 16.87 MiB/s [2024-11-03T10:11:23.803Z] 4344.50 IOPS, 16.97 MiB/s 00:16:55.441 Latency(us) 00:16:55.441 [2024-11-03T10:11:23.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.441 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:55.441 Verification LBA range: start 0x0 length 0x1400000 00:16:55.441 ftl0 : 4.02 4359.18 17.03 0.00 0.00 29272.13 447.41 38918.30 00:16:55.441 [2024-11-03T10:11:23.803Z] =================================================================================================================== 00:16:55.441 [2024-11-03T10:11:23.803Z] Total : 4359.18 17.03 0.00 0.00 29272.13 0.00 38918.30 00:16:55.441 [2024-11-03 10:11:23.423053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:55.441 { 00:16:55.441 "results": [ 00:16:55.441 { 00:16:55.441 "job": "ftl0", 00:16:55.441 "core_mask": "0x1", 00:16:55.441 "workload": "verify", 00:16:55.441 "status": "finished", 00:16:55.441 "verify_range": { 00:16:55.441 "start": 0, 00:16:55.441 "length": 20971520 00:16:55.441 }, 00:16:55.441 "queue_depth": 128, 00:16:55.441 "io_size": 4096, 00:16:55.441 "runtime": 4.01589, 00:16:55.441 "iops": 4359.18314495666, 00:16:55.441 "mibps": 17.028059159986952, 00:16:55.441 "io_failed": 0, 00:16:55.441 "io_timeout": 0, 00:16:55.441 "avg_latency_us": 29272.12731775479, 00:16:55.441 "min_latency_us": 447.40923076923076, 00:16:55.441 "max_latency_us": 38918.301538461536 00:16:55.441 } 00:16:55.441 ], 00:16:55.441 "core_count": 1 00:16:55.441 } 00:16:55.441 10:11:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:55.441 [2024-11-03 10:11:23.641915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.441 [2024-11-03 10:11:23.641970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:55.441 [2024-11-03 10:11:23.641986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:55.441 [2024-11-03 10:11:23.642002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.441 [2024-11-03 10:11:23.642029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:55.441 [2024-11-03 10:11:23.642986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.441 [2024-11-03 10:11:23.643035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:55.441 [2024-11-03 10:11:23.643048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:16:55.441 [2024-11-03 10:11:23.643069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.441 [2024-11-03 10:11:23.646170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.441 [2024-11-03 10:11:23.646412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:55.441 [2024-11-03 10:11:23.646435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:16:55.441 [2024-11-03 10:11:23.646449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.884760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.884848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:55.738 [2024-11-03 10:11:23.884867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 238.282 ms 00:16:55.738 [2024-11-03 10:11:23.884878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.891186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.891252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:55.738 [2024-11-03 10:11:23.891265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.249 ms 00:16:55.738 [2024-11-03 10:11:23.891275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.894369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.894421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:55.738 [2024-11-03 10:11:23.894432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:16:55.738 [2024-11-03 10:11:23.894443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.900771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.900975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:55.738 [2024-11-03 10:11:23.900998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.284 ms 00:16:55.738 [2024-11-03 10:11:23.901021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.901147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.901169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:55.738 [2024-11-03 10:11:23.901179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:55.738 [2024-11-03 10:11:23.901189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.904374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.904426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:55.738 [2024-11-03 10:11:23.904437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:16:55.738 [2024-11-03 10:11:23.904448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.907053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.907101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:55.738 [2024-11-03 10:11:23.907112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:16:55.738 [2024-11-03 10:11:23.907122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.909128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.909177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:55.738 [2024-11-03 10:11:23.909187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.964 ms 00:16:55.738 [2024-11-03 10:11:23.909200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.911015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.738 [2024-11-03 10:11:23.911181] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:55.738 [2024-11-03 10:11:23.911198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:16:55.738 [2024-11-03 10:11:23.911207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.738 [2024-11-03 10:11:23.911348] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:55.738 [2024-11-03 10:11:23.911387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:55.738 [2024-11-03 10:11:23.911602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:55.738 [2024-11-03 10:11:23.911664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.911995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912279] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:55.739 [2024-11-03 10:11:23.912321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:55.739 [2024-11-03 10:11:23.912330] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1288131a-1fe4-4816-8159-d8067d518b71 00:16:55.739 [2024-11-03 10:11:23.912340] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:55.739 [2024-11-03 10:11:23.912350] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:55.739 [2024-11-03 10:11:23.912360] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:55.739 [2024-11-03 10:11:23.912375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:55.739 [2024-11-03 10:11:23.912387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:55.739 [2024-11-03 10:11:23.912395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:55.739 [2024-11-03 10:11:23.912411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:55.739 [2024-11-03 10:11:23.912417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:55.739 [2024-11-03 10:11:23.912425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:55.739 [2024-11-03 10:11:23.912433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.739 [2024-11-03 10:11:23.912443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:55.739 [2024-11-03 10:11:23.912452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:16:55.739 [2024-11-03 10:11:23.912461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.739 [2024-11-03 10:11:23.915206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.739 [2024-11-03 10:11:23.915357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:55.739 [2024-11-03 10:11:23.915423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:16:55.739 [2024-11-03 10:11:23.915451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.739 [2024-11-03 10:11:23.915746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.739 [2024-11-03 10:11:23.915872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:55.739 [2024-11-03 10:11:23.915936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:55.739 [2024-11-03 10:11:23.915965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.739 [2024-11-03 10:11:23.922912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.739 [2024-11-03 10:11:23.923076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.739 [2024-11-03 10:11:23.923138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.923164] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.923263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.923349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.740 [2024-11-03 10:11:23.923401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.923427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.923525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.923557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.740 [2024-11-03 10:11:23.923578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.923640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.923673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.923696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.740 [2024-11-03 10:11:23.924179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.924271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.938212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.938429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.740 [2024-11-03 10:11:23.938491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.938528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.950196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.950399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.740 [2024-11-03 10:11:23.950458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.950485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.950610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.950645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.740 [2024-11-03 10:11:23.950666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.950687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.950745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.950770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.740 [2024-11-03 10:11:23.950864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.950895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.951000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.951029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.740 [2024-11-03 10:11:23.951057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:55.740 [2024-11-03 10:11:23.951078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.951123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.951149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:55.740 [2024-11-03 10:11:23.951169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.951305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.951854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.951914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.740 [2024-11-03 10:11:23.952113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.952141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.952239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.740 [2024-11-03 10:11:23.952254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.740 [2024-11-03 10:11:23.952263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.740 [2024-11-03 10:11:23.952276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.740 [2024-11-03 10:11:23.952426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.470 ms, result 0 00:16:55.740 true 00:16:55.740 10:11:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84821 00:16:55.740 10:11:23 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84821 ']' 00:16:55.740 10:11:23 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84821 00:16:55.740 10:11:23 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:55.740 10:11:23 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.740 10:11:23 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84821 00:16:55.740 killing process with pid 84821 00:16:55.740 Received shutdown signal, test time was about 4.000000 seconds 00:16:55.740 00:16:55.740 Latency(us) 00:16:55.740 [2024-11-03T10:11:24.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.740 [2024-11-03T10:11:24.102Z] =================================================================================================================== 00:16:55.740 [2024-11-03T10:11:24.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.740 10:11:24 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.740 10:11:24 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.740 10:11:24 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84821' 00:16:55.740 10:11:24 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84821 00:16:55.740 10:11:24 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84821 00:16:56.027 Remove shared memory files 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:56.027 10:11:24 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:56.027 ************************************ 00:16:56.027 END TEST ftl_bdevperf 00:16:56.027 ************************************ 00:16:56.027 00:16:56.027 real 0m21.420s 00:16:56.027 user 0m24.104s 00:16:56.027 sys 0m0.832s 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:56.027 10:11:24 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:56.027 10:11:24 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:56.027 10:11:24 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:56.027 10:11:24 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.027 10:11:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:56.027 ************************************ 00:16:56.027 START TEST ftl_trim 00:16:56.027 ************************************ 00:16:56.027 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:56.288 * Looking for test storage... 00:16:56.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.288 10:11:24 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:56.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.288 --rc genhtml_branch_coverage=1 00:16:56.288 --rc genhtml_function_coverage=1 00:16:56.288 --rc genhtml_legend=1 00:16:56.288 --rc geninfo_all_blocks=1 00:16:56.288 --rc geninfo_unexecuted_blocks=1 00:16:56.288 00:16:56.288 ' 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:56.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.288 --rc genhtml_branch_coverage=1 00:16:56.288 --rc genhtml_function_coverage=1 00:16:56.288 --rc genhtml_legend=1 00:16:56.288 --rc geninfo_all_blocks=1 00:16:56.288 --rc geninfo_unexecuted_blocks=1 00:16:56.288 00:16:56.288 ' 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:56.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.288 --rc genhtml_branch_coverage=1 00:16:56.288 --rc genhtml_function_coverage=1 00:16:56.288 --rc genhtml_legend=1 00:16:56.288 --rc geninfo_all_blocks=1 00:16:56.288 --rc geninfo_unexecuted_blocks=1 00:16:56.288 00:16:56.288 ' 00:16:56.288 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:56.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.288 --rc genhtml_branch_coverage=1 00:16:56.288 --rc genhtml_function_coverage=1 00:16:56.288 --rc genhtml_legend=1 00:16:56.288 --rc geninfo_all_blocks=1 00:16:56.288 --rc geninfo_unexecuted_blocks=1 00:16:56.288 00:16:56.288 ' 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
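The xtrace above shows scripts/common.sh comparing the installed lcov version against 2 ("lt 1.15 2") and, because 1.x sorts lower, exporting the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" option spellings. A minimal sketch of that dotted-version comparison idiom, as a simplified reconstruction (the helper name cmp_versions_lt is illustrative, not the actual scripts/common.sh entry point):

    # Succeeds (exit 0) when dotted version $1 sorts strictly below $2.
    # Mirrors the IFS=.-: splitting and per-component loop seen in the trace.
    cmp_versions_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
        done
        return 1                                              # equal: not less-than
    }

    cmp_versions_lt 1.15 2 && echo 'lcov < 2: keep legacy --rc lcov_* option names'

For "1.15" versus "2" the first component already decides it (1 < 2), which is the path the trace takes before setting lcov_rc_opt.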
00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:56.288 10:11:24 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:56.289 10:11:24 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85167 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85167 00:16:56.289 10:11:24 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:56.289 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85167 ']' 00:16:56.289 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.289 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.289 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.289 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.289 10:11:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:56.289 [2024-11-03 10:11:24.631677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:56.289 [2024-11-03 10:11:24.632050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85167 ] 00:16:56.548 [2024-11-03 10:11:24.769539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.548 [2024-11-03 10:11:24.821801] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.548 [2024-11-03 10:11:24.822123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.548 [2024-11-03 10:11:24.822164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.489 10:11:25 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.489 10:11:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:57.490 10:11:25 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:57.490 10:11:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:57.490 10:11:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:57.490 10:11:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:57.490 10:11:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:57.490 10:11:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:57.748 10:11:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:57.748 { 00:16:57.748 "name": "nvme0n1", 00:16:57.748 "aliases": [ 
00:16:57.748 "7caaa619-2f79-4bb1-8a82-e8e1f585910f" 00:16:57.748 ], 00:16:57.748 "product_name": "NVMe disk", 00:16:57.748 "block_size": 4096, 00:16:57.748 "num_blocks": 1310720, 00:16:57.748 "uuid": "7caaa619-2f79-4bb1-8a82-e8e1f585910f", 00:16:57.748 "numa_id": -1, 00:16:57.748 "assigned_rate_limits": { 00:16:57.748 "rw_ios_per_sec": 0, 00:16:57.748 "rw_mbytes_per_sec": 0, 00:16:57.748 "r_mbytes_per_sec": 0, 00:16:57.748 "w_mbytes_per_sec": 0 00:16:57.748 }, 00:16:57.748 "claimed": true, 00:16:57.748 "claim_type": "read_many_write_one", 00:16:57.748 "zoned": false, 00:16:57.748 "supported_io_types": { 00:16:57.748 "read": true, 00:16:57.748 "write": true, 00:16:57.748 "unmap": true, 00:16:57.748 "flush": true, 00:16:57.748 "reset": true, 00:16:57.748 "nvme_admin": true, 00:16:57.748 "nvme_io": true, 00:16:57.748 "nvme_io_md": false, 00:16:57.748 "write_zeroes": true, 00:16:57.748 "zcopy": false, 00:16:57.748 "get_zone_info": false, 00:16:57.748 "zone_management": false, 00:16:57.748 "zone_append": false, 00:16:57.748 "compare": true, 00:16:57.748 "compare_and_write": false, 00:16:57.748 "abort": true, 00:16:57.748 "seek_hole": false, 00:16:57.748 "seek_data": false, 00:16:57.748 "copy": true, 00:16:57.748 "nvme_iov_md": false 00:16:57.748 }, 00:16:57.748 "driver_specific": { 00:16:57.748 "nvme": [ 00:16:57.748 { 00:16:57.748 "pci_address": "0000:00:11.0", 00:16:57.748 "trid": { 00:16:57.748 "trtype": "PCIe", 00:16:57.748 "traddr": "0000:00:11.0" 00:16:57.748 }, 00:16:57.748 "ctrlr_data": { 00:16:57.748 "cntlid": 0, 00:16:57.748 "vendor_id": "0x1b36", 00:16:57.748 "model_number": "QEMU NVMe Ctrl", 00:16:57.748 "serial_number": "12341", 00:16:57.748 "firmware_revision": "8.0.0", 00:16:57.748 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:57.748 "oacs": { 00:16:57.748 "security": 0, 00:16:57.748 "format": 1, 00:16:57.748 "firmware": 0, 00:16:57.748 "ns_manage": 1 00:16:57.748 }, 00:16:57.748 "multi_ctrlr": false, 00:16:57.748 "ana_reporting": false 00:16:57.748 }, 00:16:57.748 "vs": { 00:16:57.748 "nvme_version": "1.4" 00:16:57.748 }, 00:16:57.748 "ns_data": { 00:16:57.748 "id": 1, 00:16:57.748 "can_share": false 00:16:57.748 } 00:16:57.748 } 00:16:57.748 ], 00:16:57.748 "mp_policy": "active_passive" 00:16:57.748 } 00:16:57.748 } 00:16:57.748 ]' 00:16:57.748 10:11:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:57.748 10:11:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:57.748 10:11:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:57.748 10:11:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:57.748 10:11:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:57.748 10:11:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:57.748 10:11:26 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:57.748 10:11:26 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:57.748 10:11:26 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:57.749 10:11:26 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:57.749 10:11:26 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:58.008 10:11:26 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=bdd6dbcf-a177-4ee0-aff1-a4ea6943180b 00:16:58.008 10:11:26 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:58.008 10:11:26 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u bdd6dbcf-a177-4ee0-aff1-a4ea6943180b 00:16:58.267 10:11:26 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:58.526 10:11:26 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=187df106-2bef-4c42-b073-f97c92e02c6b 00:16:58.526 10:11:26 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 187df106-2bef-4c42-b073-f97c92e02c6b 00:16:58.786 10:11:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e946d12b-8105-478b-a92f-39d27a096e36 00:16:58.786 10:11:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e946d12b-8105-478b-a92f-39d27a096e36 00:16:58.786 10:11:27 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:58.786 10:11:27 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:58.786 10:11:27 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e946d12b-8105-478b-a92f-39d27a096e36 00:16:58.786 10:11:27 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:58.786 10:11:27 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e946d12b-8105-478b-a92f-39d27a096e36 00:16:58.786 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e946d12b-8105-478b-a92f-39d27a096e36 00:16:58.786 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:58.786 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:58.786 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:58.787 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e946d12b-8105-478b-a92f-39d27a096e36 00:16:59.046 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:59.046 { 00:16:59.046 "name": "e946d12b-8105-478b-a92f-39d27a096e36", 00:16:59.046 "aliases": [ 00:16:59.046 "lvs/nvme0n1p0" 00:16:59.046 ], 00:16:59.046 "product_name": "Logical Volume", 00:16:59.046 "block_size": 4096, 00:16:59.046 "num_blocks": 26476544, 00:16:59.046 "uuid": "e946d12b-8105-478b-a92f-39d27a096e36", 00:16:59.046 "assigned_rate_limits": { 00:16:59.046 "rw_ios_per_sec": 0, 00:16:59.046 "rw_mbytes_per_sec": 0, 00:16:59.046 "r_mbytes_per_sec": 0, 00:16:59.046 "w_mbytes_per_sec": 0 00:16:59.046 }, 00:16:59.046 "claimed": false, 00:16:59.046 "zoned": false, 00:16:59.046 "supported_io_types": { 00:16:59.046 "read": true, 00:16:59.046 "write": true, 00:16:59.046 "unmap": true, 00:16:59.046 "flush": false, 00:16:59.046 "reset": true, 00:16:59.046 "nvme_admin": false, 00:16:59.046 "nvme_io": false, 00:16:59.046 "nvme_io_md": false, 00:16:59.046 "write_zeroes": true, 00:16:59.046 "zcopy": false, 00:16:59.046 "get_zone_info": false, 00:16:59.046 "zone_management": false, 00:16:59.046 "zone_append": false, 00:16:59.046 "compare": false, 00:16:59.046 "compare_and_write": false, 00:16:59.046 "abort": false, 00:16:59.046 "seek_hole": true, 00:16:59.046 "seek_data": true, 00:16:59.046 "copy": false, 00:16:59.046 "nvme_iov_md": false 00:16:59.046 }, 00:16:59.046 "driver_specific": { 00:16:59.046 "lvol": { 00:16:59.046 "lvol_store_uuid": "187df106-2bef-4c42-b073-f97c92e02c6b", 00:16:59.046 "base_bdev": "nvme0n1", 00:16:59.046 "thin_provision": true, 00:16:59.046 "num_allocated_clusters": 0, 00:16:59.046 "snapshot": false, 00:16:59.046 "clone": false, 00:16:59.046 "esnap_clone": false 00:16:59.046 } 00:16:59.046 } 00:16:59.046 } 00:16:59.046 ]' 00:16:59.046 10:11:27 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:59.046 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:59.046 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:59.046 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:59.046 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:59.046 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:59.046 10:11:27 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:59.046 10:11:27 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:59.046 10:11:27 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:59.307 10:11:27 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:59.307 10:11:27 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:59.307 10:11:27 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e946d12b-8105-478b-a92f-39d27a096e36 00:16:59.307 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e946d12b-8105-478b-a92f-39d27a096e36 00:16:59.307 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:59.307 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:59.307 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:59.307 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e946d12b-8105-478b-a92f-39d27a096e36 00:16:59.567 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:59.567 { 00:16:59.567 "name": "e946d12b-8105-478b-a92f-39d27a096e36", 00:16:59.567 "aliases": [ 00:16:59.567 "lvs/nvme0n1p0" 00:16:59.567 ], 00:16:59.567 "product_name": "Logical Volume", 00:16:59.567 "block_size": 4096, 00:16:59.567 "num_blocks": 26476544, 00:16:59.567 "uuid": "e946d12b-8105-478b-a92f-39d27a096e36", 00:16:59.567 "assigned_rate_limits": { 00:16:59.567 "rw_ios_per_sec": 0, 00:16:59.567 "rw_mbytes_per_sec": 0, 00:16:59.567 "r_mbytes_per_sec": 0, 00:16:59.567 "w_mbytes_per_sec": 0 00:16:59.567 }, 00:16:59.567 "claimed": false, 00:16:59.567 "zoned": false, 00:16:59.567 "supported_io_types": { 00:16:59.567 "read": true, 00:16:59.567 "write": true, 00:16:59.567 "unmap": true, 00:16:59.567 "flush": false, 00:16:59.567 "reset": true, 00:16:59.567 "nvme_admin": false, 00:16:59.567 "nvme_io": false, 00:16:59.567 "nvme_io_md": false, 00:16:59.567 "write_zeroes": true, 00:16:59.567 "zcopy": false, 00:16:59.567 "get_zone_info": false, 00:16:59.567 "zone_management": false, 00:16:59.567 "zone_append": false, 00:16:59.567 "compare": false, 00:16:59.567 "compare_and_write": false, 00:16:59.567 "abort": false, 00:16:59.567 "seek_hole": true, 00:16:59.567 "seek_data": true, 00:16:59.567 "copy": false, 00:16:59.567 "nvme_iov_md": false 00:16:59.567 }, 00:16:59.567 "driver_specific": { 00:16:59.567 "lvol": { 00:16:59.567 "lvol_store_uuid": "187df106-2bef-4c42-b073-f97c92e02c6b", 00:16:59.567 "base_bdev": "nvme0n1", 00:16:59.567 "thin_provision": true, 00:16:59.567 "num_allocated_clusters": 0, 00:16:59.567 "snapshot": false, 00:16:59.567 "clone": false, 00:16:59.567 "esnap_clone": false 00:16:59.567 } 00:16:59.567 } 00:16:59.567 } 00:16:59.567 ]' 00:16:59.567 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:59.567 10:11:27 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:59.567 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:59.567 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:59.567 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:59.567 10:11:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:59.567 10:11:27 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:59.567 10:11:27 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:59.828 10:11:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:59.828 10:11:28 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:59.828 10:11:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e946d12b-8105-478b-a92f-39d27a096e36 00:16:59.828 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e946d12b-8105-478b-a92f-39d27a096e36 00:16:59.828 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:59.828 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:59.828 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:59.828 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e946d12b-8105-478b-a92f-39d27a096e36 00:17:00.089 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:00.089 { 00:17:00.089 "name": "e946d12b-8105-478b-a92f-39d27a096e36", 00:17:00.089 "aliases": [ 00:17:00.089 "lvs/nvme0n1p0" 00:17:00.089 ], 00:17:00.089 "product_name": "Logical Volume", 00:17:00.089 "block_size": 4096, 00:17:00.089 "num_blocks": 26476544, 00:17:00.089 "uuid": "e946d12b-8105-478b-a92f-39d27a096e36", 00:17:00.089 "assigned_rate_limits": { 00:17:00.089 "rw_ios_per_sec": 0, 00:17:00.089 "rw_mbytes_per_sec": 0, 00:17:00.089 "r_mbytes_per_sec": 0, 00:17:00.089 "w_mbytes_per_sec": 0 00:17:00.089 }, 00:17:00.089 "claimed": false, 00:17:00.089 "zoned": false, 00:17:00.089 "supported_io_types": { 00:17:00.089 "read": true, 00:17:00.089 "write": true, 00:17:00.089 "unmap": true, 00:17:00.089 "flush": false, 00:17:00.089 "reset": true, 00:17:00.089 "nvme_admin": false, 00:17:00.089 "nvme_io": false, 00:17:00.089 "nvme_io_md": false, 00:17:00.089 "write_zeroes": true, 00:17:00.089 "zcopy": false, 00:17:00.089 "get_zone_info": false, 00:17:00.089 "zone_management": false, 00:17:00.089 "zone_append": false, 00:17:00.089 "compare": false, 00:17:00.089 "compare_and_write": false, 00:17:00.089 "abort": false, 00:17:00.089 "seek_hole": true, 00:17:00.089 "seek_data": true, 00:17:00.089 "copy": false, 00:17:00.089 "nvme_iov_md": false 00:17:00.089 }, 00:17:00.089 "driver_specific": { 00:17:00.089 "lvol": { 00:17:00.089 "lvol_store_uuid": "187df106-2bef-4c42-b073-f97c92e02c6b", 00:17:00.089 "base_bdev": "nvme0n1", 00:17:00.089 "thin_provision": true, 00:17:00.089 "num_allocated_clusters": 0, 00:17:00.089 "snapshot": false, 00:17:00.089 "clone": false, 00:17:00.089 "esnap_clone": false 00:17:00.089 } 00:17:00.089 } 00:17:00.089 } 00:17:00.089 ]' 00:17:00.089 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:00.089 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:00.089 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:00.089 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:17:00.089 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:00.089 10:11:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:00.089 10:11:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:00.089 10:11:28 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e946d12b-8105-478b-a92f-39d27a096e36 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:00.351 [2024-11-03 10:11:28.561353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.351 [2024-11-03 10:11:28.561413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:00.351 [2024-11-03 10:11:28.561427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:00.351 [2024-11-03 10:11:28.561438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.351 [2024-11-03 10:11:28.564005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.351 [2024-11-03 10:11:28.564180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.351 [2024-11-03 10:11:28.564197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.539 ms 00:17:00.351 [2024-11-03 10:11:28.564210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.351 [2024-11-03 10:11:28.564330] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:00.351 [2024-11-03 10:11:28.564796] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:00.351 [2024-11-03 10:11:28.564827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.351 [2024-11-03 10:11:28.564854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.351 [2024-11-03 10:11:28.564864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:17:00.351 [2024-11-03 10:11:28.564874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.351 [2024-11-03 10:11:28.565244] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:17:00.351 [2024-11-03 10:11:28.566607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.351 [2024-11-03 10:11:28.566643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:00.351 [2024-11-03 10:11:28.566656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:00.351 [2024-11-03 10:11:28.566664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.351 [2024-11-03 10:11:28.573758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.351 [2024-11-03 10:11:28.573789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.351 [2024-11-03 10:11:28.573800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:17:00.351 [2024-11-03 10:11:28.573820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.351 [2024-11-03 10:11:28.573940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.351 [2024-11-03 10:11:28.573952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.351 [2024-11-03 10:11:28.573974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.058 ms 00:17:00.351 [2024-11-03 10:11:28.573981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.351 [2024-11-03 10:11:28.574024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.351 [2024-11-03 10:11:28.574046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:00.351 [2024-11-03 10:11:28.574057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:00.351 [2024-11-03 10:11:28.574073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.352 [2024-11-03 10:11:28.574115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:00.352 [2024-11-03 10:11:28.575885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.352 [2024-11-03 10:11:28.575914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.352 [2024-11-03 10:11:28.575926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:17:00.352 [2024-11-03 10:11:28.575936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.352 [2024-11-03 10:11:28.575997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.352 [2024-11-03 10:11:28.576022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:00.352 [2024-11-03 10:11:28.576030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:00.352 [2024-11-03 10:11:28.576041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.352 [2024-11-03 10:11:28.576075] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:00.352 [2024-11-03 10:11:28.576254] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:00.352 [2024-11-03 10:11:28.576270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:00.352 [2024-11-03 10:11:28.576283] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:00.352 [2024-11-03 10:11:28.576295] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576305] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576313] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:00.352 [2024-11-03 10:11:28.576335] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:00.352 [2024-11-03 10:11:28.576342] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:00.352 [2024-11-03 10:11:28.576352] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:00.352 [2024-11-03 10:11:28.576360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.352 [2024-11-03 10:11:28.576368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:00.352 [2024-11-03 10:11:28.576376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:00.352 [2024-11-03 10:11:28.576387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.352 [2024-11-03 10:11:28.576486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.352 
[2024-11-03 10:11:28.576498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:00.352 [2024-11-03 10:11:28.576505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:00.352 [2024-11-03 10:11:28.576514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.352 [2024-11-03 10:11:28.576639] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:00.352 [2024-11-03 10:11:28.576656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:00.352 [2024-11-03 10:11:28.576666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:00.352 [2024-11-03 10:11:28.576696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:00.352 [2024-11-03 10:11:28.576722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.352 [2024-11-03 10:11:28.576739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:00.352 [2024-11-03 10:11:28.576750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:00.352 [2024-11-03 10:11:28.576757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.352 [2024-11-03 10:11:28.576769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:00.352 [2024-11-03 10:11:28.576776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:00.352 [2024-11-03 10:11:28.576787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:00.352 [2024-11-03 10:11:28.576805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:00.352 [2024-11-03 10:11:28.576830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:00.352 [2024-11-03 10:11:28.576856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:00.352 [2024-11-03 10:11:28.576880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:00.352 [2024-11-03 10:11:28.576919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.352 [2024-11-03 10:11:28.576933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:00.352 [2024-11-03 10:11:28.576939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.352 [2024-11-03 10:11:28.576953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:00.352 [2024-11-03 10:11:28.576963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:00.352 [2024-11-03 10:11:28.576969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.352 [2024-11-03 10:11:28.576978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:00.352 [2024-11-03 10:11:28.576984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:00.352 [2024-11-03 10:11:28.576992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.352 [2024-11-03 10:11:28.576998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:00.352 [2024-11-03 10:11:28.577006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:00.352 [2024-11-03 10:11:28.577012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.352 [2024-11-03 10:11:28.577020] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:00.352 [2024-11-03 10:11:28.577028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:00.352 [2024-11-03 10:11:28.577039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.352 [2024-11-03 10:11:28.577046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.352 [2024-11-03 10:11:28.577057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:00.352 [2024-11-03 10:11:28.577065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:00.352 [2024-11-03 10:11:28.577073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:00.352 [2024-11-03 10:11:28.577080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:00.352 [2024-11-03 10:11:28.577088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:00.352 [2024-11-03 10:11:28.577095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:00.352 [2024-11-03 10:11:28.577106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:00.352 [2024-11-03 10:11:28.577115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.352 [2024-11-03 10:11:28.577125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:00.352 [2024-11-03 10:11:28.577132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:00.352 [2024-11-03 10:11:28.577141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:00.352 [2024-11-03 10:11:28.577148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:00.352 [2024-11-03 10:11:28.577156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:00.352 [2024-11-03 10:11:28.577164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:00.352 [2024-11-03 10:11:28.577175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:00.353 [2024-11-03 10:11:28.577183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:00.353 [2024-11-03 10:11:28.577191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:00.353 [2024-11-03 10:11:28.577198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:00.353 [2024-11-03 10:11:28.577207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:00.353 [2024-11-03 10:11:28.577214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:00.353 [2024-11-03 10:11:28.577235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:00.353 [2024-11-03 10:11:28.577243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:00.353 [2024-11-03 10:11:28.577252] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:00.353 [2024-11-03 10:11:28.577262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.353 [2024-11-03 10:11:28.577271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:00.353 [2024-11-03 10:11:28.577278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:00.353 [2024-11-03 10:11:28.577287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:00.353 [2024-11-03 10:11:28.577295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:00.353 [2024-11-03 10:11:28.577305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.353 [2024-11-03 10:11:28.577312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:00.353 [2024-11-03 10:11:28.577335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:17:00.353 [2024-11-03 10:11:28.577342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.353 [2024-11-03 10:11:28.577419] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:00.353 [2024-11-03 10:11:28.577434] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:02.890 [2024-11-03 10:11:31.172687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.172756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:02.890 [2024-11-03 10:11:31.172774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2595.253 ms 00:17:02.890 [2024-11-03 10:11:31.172783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.204151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.204273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.890 [2024-11-03 10:11:31.204313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.227 ms 00:17:02.890 [2024-11-03 10:11:31.204336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.204705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.205026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.890 [2024-11-03 10:11:31.205072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:17:02.890 [2024-11-03 10:11:31.205093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.217697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.217735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.890 [2024-11-03 10:11:31.217747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.525 ms 00:17:02.890 [2024-11-03 10:11:31.217756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.217826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.217850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.890 [2024-11-03 10:11:31.217861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:02.890 [2024-11-03 10:11:31.217869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.218315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.218338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.890 [2024-11-03 10:11:31.218351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:02.890 [2024-11-03 10:11:31.218360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.218505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.218524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.890 [2024-11-03 10:11:31.218535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:02.890 [2024-11-03 10:11:31.218544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.225829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.890 [2024-11-03 10:11:31.225875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:02.890 [2024-11-03 10:11:31.225887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.248 ms 00:17:02.890 [2024-11-03 10:11:31.225905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.890 [2024-11-03 10:11:31.234961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:03.152 [2024-11-03 10:11:31.252581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.252620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:03.152 [2024-11-03 10:11:31.252631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.572 ms 00:17:03.152 [2024-11-03 10:11:31.252641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.316106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.316155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:03.152 [2024-11-03 10:11:31.316168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.374 ms 00:17:03.152 [2024-11-03 10:11:31.316182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.316392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.316407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:03.152 [2024-11-03 10:11:31.316421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:17:03.152 [2024-11-03 10:11:31.316431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.319303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.319339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:03.152 [2024-11-03 10:11:31.319349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:17:03.152 [2024-11-03 10:11:31.319359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.322023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.322057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:03.152 [2024-11-03 10:11:31.322067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:17:03.152 [2024-11-03 10:11:31.322077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.322431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.322450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:03.152 [2024-11-03 10:11:31.322461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:03.152 [2024-11-03 10:11:31.322473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.352638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.352674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:03.152 [2024-11-03 10:11:31.352685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.120 ms 00:17:03.152 [2024-11-03 10:11:31.352696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
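The "l2p maximum resident size is: 59 (of 60) MiB" notice above follows directly from numbers already printed in this trace: the layout dump reported 23592960 L2P entries at an address size of 4 bytes, i.e. a 90 MiB table (matching the "Region l2p ... blocks: 90.00 MiB" entry), while bdev_ftl_create was invoked with --l2p_dram_limit 60, so only a window of the table stays resident in DRAM. A back-of-the-envelope check, with all values copied from the trace (the 1 MiB of headroom behind "59 of 60" is internal to ftl_l2p_cache and taken as given here):

    l2p_entries=23592960      # "L2P entries" from the layout dump
    l2p_addr_size=4           # "L2P address size" from the layout dump
    echo "full L2P table: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # -> 90 MiB
    echo "resident in DRAM: at most 59 of the 60 MiB from --l2p_dram_limit"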
00:17:03.152 [2024-11-03 10:11:31.357043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.357087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:03.152 [2024-11-03 10:11:31.357100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.255 ms 00:17:03.152 [2024-11-03 10:11:31.357112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.360150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.360182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:03.152 [2024-11-03 10:11:31.360191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:17:03.152 [2024-11-03 10:11:31.360201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.363694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.363729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:03.152 [2024-11-03 10:11:31.363740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:17:03.152 [2024-11-03 10:11:31.363752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.363816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.363830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:03.152 [2024-11-03 10:11:31.363839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:03.152 [2024-11-03 10:11:31.363852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.363927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.152 [2024-11-03 10:11:31.363938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:03.152 [2024-11-03 10:11:31.363946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:03.152 [2024-11-03 10:11:31.363955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.152 [2024-11-03 10:11:31.364875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.152 [2024-11-03 10:11:31.365845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2803.221 ms, result 0 00:17:03.152 [2024-11-03 10:11:31.366886] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:03.152 { 00:17:03.152 "name": "ftl0", 00:17:03.152 "uuid": "4dcf88cb-9c36-4fbc-b806-2f618a8df3cb" 00:17:03.152 } 00:17:03.152 10:11:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:03.152 10:11:31 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:03.152 10:11:31 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.152 10:11:31 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:03.152 10:11:31 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.152 10:11:31 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.152 10:11:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:03.412 10:11:31 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:03.673 [ 00:17:03.673 { 00:17:03.673 "name": "ftl0", 00:17:03.673 "aliases": [ 00:17:03.673 "4dcf88cb-9c36-4fbc-b806-2f618a8df3cb" 00:17:03.673 ], 00:17:03.673 "product_name": "FTL disk", 00:17:03.673 "block_size": 4096, 00:17:03.673 "num_blocks": 23592960, 00:17:03.673 "uuid": "4dcf88cb-9c36-4fbc-b806-2f618a8df3cb", 00:17:03.673 "assigned_rate_limits": { 00:17:03.673 "rw_ios_per_sec": 0, 00:17:03.673 "rw_mbytes_per_sec": 0, 00:17:03.673 "r_mbytes_per_sec": 0, 00:17:03.673 "w_mbytes_per_sec": 0 00:17:03.673 }, 00:17:03.673 "claimed": false, 00:17:03.673 "zoned": false, 00:17:03.673 "supported_io_types": { 00:17:03.673 "read": true, 00:17:03.673 "write": true, 00:17:03.673 "unmap": true, 00:17:03.673 "flush": true, 00:17:03.673 "reset": false, 00:17:03.673 "nvme_admin": false, 00:17:03.673 "nvme_io": false, 00:17:03.673 "nvme_io_md": false, 00:17:03.673 "write_zeroes": true, 00:17:03.673 "zcopy": false, 00:17:03.673 "get_zone_info": false, 00:17:03.673 "zone_management": false, 00:17:03.673 "zone_append": false, 00:17:03.673 "compare": false, 00:17:03.673 "compare_and_write": false, 00:17:03.673 "abort": false, 00:17:03.673 "seek_hole": false, 00:17:03.673 "seek_data": false, 00:17:03.673 "copy": false, 00:17:03.673 "nvme_iov_md": false 00:17:03.673 }, 00:17:03.673 "driver_specific": { 00:17:03.673 "ftl": { 00:17:03.673 "base_bdev": "e946d12b-8105-478b-a92f-39d27a096e36", 00:17:03.673 "cache": "nvc0n1p0" 00:17:03.673 } 00:17:03.673 } 00:17:03.673 } 00:17:03.673 ] 00:17:03.673 10:11:31 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:03.673 10:11:31 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:03.673 10:11:31 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:03.673 10:11:31 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:03.673 10:11:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:03.934 10:11:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:03.934 { 00:17:03.934 "name": "ftl0", 00:17:03.934 "aliases": [ 00:17:03.934 "4dcf88cb-9c36-4fbc-b806-2f618a8df3cb" 00:17:03.934 ], 00:17:03.934 "product_name": "FTL disk", 00:17:03.934 "block_size": 4096, 00:17:03.934 "num_blocks": 23592960, 00:17:03.934 "uuid": "4dcf88cb-9c36-4fbc-b806-2f618a8df3cb", 00:17:03.934 "assigned_rate_limits": { 00:17:03.934 "rw_ios_per_sec": 0, 00:17:03.934 "rw_mbytes_per_sec": 0, 00:17:03.934 "r_mbytes_per_sec": 0, 00:17:03.934 "w_mbytes_per_sec": 0 00:17:03.934 }, 00:17:03.934 "claimed": false, 00:17:03.934 "zoned": false, 00:17:03.934 "supported_io_types": { 00:17:03.934 "read": true, 00:17:03.934 "write": true, 00:17:03.934 "unmap": true, 00:17:03.934 "flush": true, 00:17:03.934 "reset": false, 00:17:03.934 "nvme_admin": false, 00:17:03.934 "nvme_io": false, 00:17:03.934 "nvme_io_md": false, 00:17:03.934 "write_zeroes": true, 00:17:03.934 "zcopy": false, 00:17:03.934 "get_zone_info": false, 00:17:03.934 "zone_management": false, 00:17:03.934 "zone_append": false, 00:17:03.934 "compare": false, 00:17:03.934 "compare_and_write": false, 00:17:03.934 "abort": false, 00:17:03.934 "seek_hole": false, 00:17:03.934 "seek_data": false, 00:17:03.934 "copy": false, 00:17:03.934 "nvme_iov_md": false 00:17:03.934 }, 00:17:03.934 "driver_specific": { 00:17:03.934 "ftl": { 00:17:03.934 "base_bdev": "e946d12b-8105-478b-a92f-39d27a096e36", 
00:17:03.934 "cache": "nvc0n1p0" 00:17:03.934 } 00:17:03.934 } 00:17:03.934 } 00:17:03.934 ]' 00:17:03.934 10:11:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:03.934 10:11:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:03.934 10:11:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:04.197 [2024-11-03 10:11:32.399146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.399185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:04.197 [2024-11-03 10:11:32.399199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:04.197 [2024-11-03 10:11:32.399207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.399265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:04.197 [2024-11-03 10:11:32.399808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.399831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:04.197 [2024-11-03 10:11:32.399841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:04.197 [2024-11-03 10:11:32.399876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.400449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.400464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:04.197 [2024-11-03 10:11:32.400473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:17:04.197 [2024-11-03 10:11:32.400485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.404152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.404286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:04.197 [2024-11-03 10:11:32.404301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:17:04.197 [2024-11-03 10:11:32.404321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.411218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.411257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:04.197 [2024-11-03 10:11:32.411267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.846 ms 00:17:04.197 [2024-11-03 10:11:32.411290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.413490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.413524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:04.197 [2024-11-03 10:11:32.413533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:17:04.197 [2024-11-03 10:11:32.413544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.418968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.419104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:04.197 [2024-11-03 10:11:32.419120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 5.374 ms 00:17:04.197 [2024-11-03 10:11:32.419140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.419353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.419367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:04.197 [2024-11-03 10:11:32.419375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:04.197 [2024-11-03 10:11:32.419389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.421776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.421810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:04.197 [2024-11-03 10:11:32.421818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.346 ms 00:17:04.197 [2024-11-03 10:11:32.421830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.423792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.423825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:04.197 [2024-11-03 10:11:32.423834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.919 ms 00:17:04.197 [2024-11-03 10:11:32.423846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.425281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.425386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:04.197 [2024-11-03 10:11:32.425399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:17:04.197 [2024-11-03 10:11:32.425408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.426869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.197 [2024-11-03 10:11:32.426903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:04.197 [2024-11-03 10:11:32.426911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:17:04.197 [2024-11-03 10:11:32.426921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.197 [2024-11-03 10:11:32.426967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:04.197 [2024-11-03 10:11:32.426983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.426993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427049] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 
10:11:32.427279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:04.197 [2024-11-03 10:11:32.427287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:17:04.198 [2024-11-03 10:11:32.427499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:04.198 [2024-11-03 10:11:32.427892] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:04.198 [2024-11-03 10:11:32.427900] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:17:04.198 [2024-11-03 10:11:32.427910] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:04.198 [2024-11-03 10:11:32.427917] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:04.198 [2024-11-03 10:11:32.427926] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:04.198 [2024-11-03 10:11:32.427934] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:04.198 [2024-11-03 10:11:32.427943] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:04.198 [2024-11-03 10:11:32.427950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:04.198 
[2024-11-03 10:11:32.427961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:04.198 [2024-11-03 10:11:32.427968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:04.198 [2024-11-03 10:11:32.427976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:04.198 [2024-11-03 10:11:32.427983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.198 [2024-11-03 10:11:32.427992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:04.198 [2024-11-03 10:11:32.427999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:17:04.198 [2024-11-03 10:11:32.428011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.198 [2024-11-03 10:11:32.430036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.198 [2024-11-03 10:11:32.430138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:04.198 [2024-11-03 10:11:32.430189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.981 ms 00:17:04.198 [2024-11-03 10:11:32.430250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.198 [2024-11-03 10:11:32.430363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.198 [2024-11-03 10:11:32.430422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:04.198 [2024-11-03 10:11:32.430459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:04.198 [2024-11-03 10:11:32.430514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.198 [2024-11-03 10:11:32.437048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.198 [2024-11-03 10:11:32.437152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.198 [2024-11-03 10:11:32.437201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.198 [2024-11-03 10:11:32.437242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.198 [2024-11-03 10:11:32.437379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.198 [2024-11-03 10:11:32.437436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.198 [2024-11-03 10:11:32.437478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.437507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.437599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.437651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.199 [2024-11-03 10:11:32.437700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.437726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.437803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.437830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.199 [2024-11-03 10:11:32.437849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.437894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.449883] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.450020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.199 [2024-11-03 10:11:32.450070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.450119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.459832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.459964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.199 [2024-11-03 10:11:32.460016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.460043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.460166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.460486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.199 [2024-11-03 10:11:32.460560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.460585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.460761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.460843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.199 [2024-11-03 10:11:32.460888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.460912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.461020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.461048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.199 [2024-11-03 10:11:32.461095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.461120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.461189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.461256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:04.199 [2024-11-03 10:11:32.461277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.461330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.461398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.461461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.199 [2024-11-03 10:11:32.461485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.461525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 [2024-11-03 10:11:32.461602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.199 [2024-11-03 10:11:32.461633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.199 [2024-11-03 10:11:32.461862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.199 [2024-11-03 10:11:32.461985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.199 
[2024-11-03 10:11:32.462186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.013 ms, result 0 00:17:04.199 true 00:17:04.199 10:11:32 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85167 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85167 ']' 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85167 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85167 00:17:04.199 killing process with pid 85167 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85167' 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85167 00:17:04.199 10:11:32 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85167 00:17:09.477 10:11:37 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:09.736 65536+0 records in 00:17:09.736 65536+0 records out 00:17:09.736 268435456 bytes (268 MB, 256 MiB) copied, 0.820195 s, 327 MB/s 00:17:09.736 10:11:38 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:09.996 [2024-11-03 10:11:38.153561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
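The write phase above is driven by two commands: dd generates 65536 x 4 KiB = 268435456 bytes (256 MiB) of random data, and spdk_dd replays that pattern onto the ftl0 bdev using the ftl.json config saved earlier via save_subsystem_config. A minimal sketch of the pair, using relative forms of the absolute paths shown in the log (the dd output path is inferred from the --if argument spdk_dd receives; the log truncates it):

  # Generate a 256 MiB random pattern: 65536 blocks of 4 KiB each.
  dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
  # Replay the pattern onto the FTL bdev; ftl.json recreates the bdev stack.
  build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
    --json=test/ftl/config/ftl.json

The spdk_dd startup and FTL re-initialization that this triggers follow below.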
00:17:09.996 [2024-11-03 10:11:38.153860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85333 ] 00:17:09.996 [2024-11-03 10:11:38.289066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.996 [2024-11-03 10:11:38.331713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.256 [2024-11-03 10:11:38.435673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.256 [2024-11-03 10:11:38.435739] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.256 [2024-11-03 10:11:38.600309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.256 [2024-11-03 10:11:38.600363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.256 [2024-11-03 10:11:38.600376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:10.256 [2024-11-03 10:11:38.600384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.256 [2024-11-03 10:11:38.602766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.257 [2024-11-03 10:11:38.602929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.257 [2024-11-03 10:11:38.602951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:17:10.257 [2024-11-03 10:11:38.602959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.257 [2024-11-03 10:11:38.603373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.257 [2024-11-03 10:11:38.603672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.257 [2024-11-03 10:11:38.603696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.257 [2024-11-03 10:11:38.603709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.257 [2024-11-03 10:11:38.603722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:17:10.257 [2024-11-03 10:11:38.603733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.257 [2024-11-03 10:11:38.605568] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:10.257 [2024-11-03 10:11:38.608844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.257 [2024-11-03 10:11:38.608888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:10.257 [2024-11-03 10:11:38.608909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:17:10.257 [2024-11-03 10:11:38.608919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.257 [2024-11-03 10:11:38.609001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.257 [2024-11-03 10:11:38.609017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:10.257 [2024-11-03 10:11:38.609026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:10.257 [2024-11-03 10:11:38.609033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.257 [2024-11-03 10:11:38.615376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:10.257 [2024-11-03 10:11:38.615410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.257 [2024-11-03 10:11:38.615419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.300 ms 00:17:10.257 [2024-11-03 10:11:38.615427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.257 [2024-11-03 10:11:38.615542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.257 [2024-11-03 10:11:38.615555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.257 [2024-11-03 10:11:38.615564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:10.257 [2024-11-03 10:11:38.615572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.257 [2024-11-03 10:11:38.615604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.257 [2024-11-03 10:11:38.615618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.257 [2024-11-03 10:11:38.615630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:10.257 [2024-11-03 10:11:38.615640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.257 [2024-11-03 10:11:38.615666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:10.519 [2024-11-03 10:11:38.617359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.519 [2024-11-03 10:11:38.617389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.519 [2024-11-03 10:11:38.617398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:17:10.519 [2024-11-03 10:11:38.617405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.519 [2024-11-03 10:11:38.617441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.519 [2024-11-03 10:11:38.617452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.519 [2024-11-03 10:11:38.617463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:10.519 [2024-11-03 10:11:38.617471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.519 [2024-11-03 10:11:38.617488] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:10.519 [2024-11-03 10:11:38.617510] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:10.519 [2024-11-03 10:11:38.617553] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:10.519 [2024-11-03 10:11:38.617568] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:10.520 [2024-11-03 10:11:38.617675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:10.520 [2024-11-03 10:11:38.617686] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.520 [2024-11-03 10:11:38.617697] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:10.520 [2024-11-03 10:11:38.617711] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:10.520 [2024-11-03 10:11:38.617720] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:10.520 [2024-11-03 10:11:38.617728] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:10.520 [2024-11-03 10:11:38.617736] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:10.520 [2024-11-03 10:11:38.617744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:10.520 [2024-11-03 10:11:38.617751] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:10.520 [2024-11-03 10:11:38.617759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.520 [2024-11-03 10:11:38.617769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:10.520 [2024-11-03 10:11:38.617779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:10.520 [2024-11-03 10:11:38.617786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.520 [2024-11-03 10:11:38.617876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.520 [2024-11-03 10:11:38.617884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:10.520 [2024-11-03 10:11:38.617896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:10.520 [2024-11-03 10:11:38.617904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.520 [2024-11-03 10:11:38.618002] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:10.520 [2024-11-03 10:11:38.618012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:10.520 [2024-11-03 10:11:38.618022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:10.520 [2024-11-03 10:11:38.618051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:10.520 [2024-11-03 10:11:38.618078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.520 [2024-11-03 10:11:38.618095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:10.520 [2024-11-03 10:11:38.618102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:10.520 [2024-11-03 10:11:38.618110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.520 [2024-11-03 10:11:38.618118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:10.520 [2024-11-03 10:11:38.618126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:10.520 [2024-11-03 10:11:38.618134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:10.520 [2024-11-03 10:11:38.618149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618157] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:10.520 [2024-11-03 10:11:38.618173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:10.520 [2024-11-03 10:11:38.618196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:10.520 [2024-11-03 10:11:38.618245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:10.520 [2024-11-03 10:11:38.618269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:10.520 [2024-11-03 10:11:38.618293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.520 [2024-11-03 10:11:38.618308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:10.520 [2024-11-03 10:11:38.618316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:10.520 [2024-11-03 10:11:38.618323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.520 [2024-11-03 10:11:38.618331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:10.520 [2024-11-03 10:11:38.618339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:10.520 [2024-11-03 10:11:38.618347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:10.520 [2024-11-03 10:11:38.618365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:10.520 [2024-11-03 10:11:38.618373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618381] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:10.520 [2024-11-03 10:11:38.618390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:10.520 [2024-11-03 10:11:38.618403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.520 [2024-11-03 10:11:38.618418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:10.520 [2024-11-03 10:11:38.618425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:10.520 [2024-11-03 10:11:38.618432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:10.520 
[2024-11-03 10:11:38.618439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:10.520 [2024-11-03 10:11:38.618446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:10.520 [2024-11-03 10:11:38.618453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:10.520 [2024-11-03 10:11:38.618469] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:10.520 [2024-11-03 10:11:38.618479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.520 [2024-11-03 10:11:38.618488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:10.520 [2024-11-03 10:11:38.618499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:10.520 [2024-11-03 10:11:38.618506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:10.520 [2024-11-03 10:11:38.618514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:10.520 [2024-11-03 10:11:38.618521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:10.520 [2024-11-03 10:11:38.618528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:10.520 [2024-11-03 10:11:38.618535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:10.520 [2024-11-03 10:11:38.618548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:10.520 [2024-11-03 10:11:38.618555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:10.520 [2024-11-03 10:11:38.618562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:10.520 [2024-11-03 10:11:38.618569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:10.521 [2024-11-03 10:11:38.618576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:10.521 [2024-11-03 10:11:38.618583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:10.521 [2024-11-03 10:11:38.618590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:10.521 [2024-11-03 10:11:38.618597] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:10.521 [2024-11-03 10:11:38.618605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.521 [2024-11-03 10:11:38.618613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:10.521 [2024-11-03 10:11:38.618622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:10.521 [2024-11-03 10:11:38.618630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:10.521 [2024-11-03 10:11:38.618637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:10.521 [2024-11-03 10:11:38.618644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.618651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:10.521 [2024-11-03 10:11:38.618664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:17:10.521 [2024-11-03 10:11:38.618671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.640335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.640386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.521 [2024-11-03 10:11:38.640399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.611 ms 00:17:10.521 [2024-11-03 10:11:38.640408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.640564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.640576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:10.521 [2024-11-03 10:11:38.640586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:10.521 [2024-11-03 10:11:38.640597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.650718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.650764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.521 [2024-11-03 10:11:38.650775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.098 ms 00:17:10.521 [2024-11-03 10:11:38.650783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.650851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.650861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.521 [2024-11-03 10:11:38.650874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:10.521 [2024-11-03 10:11:38.650882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.651340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.651364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.521 [2024-11-03 10:11:38.651375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:17:10.521 [2024-11-03 10:11:38.651384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.651528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.651542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.521 [2024-11-03 10:11:38.651552] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:17:10.521 [2024-11-03 10:11:38.651571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.657955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.658002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.521 [2024-11-03 10:11:38.658016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.360 ms 00:17:10.521 [2024-11-03 10:11:38.658028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.661367] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:10.521 [2024-11-03 10:11:38.661410] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:10.521 [2024-11-03 10:11:38.661428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.661436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:10.521 [2024-11-03 10:11:38.661445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:17:10.521 [2024-11-03 10:11:38.661452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.676658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.676705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:10.521 [2024-11-03 10:11:38.676716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.137 ms 00:17:10.521 [2024-11-03 10:11:38.676728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.679267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.679419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:10.521 [2024-11-03 10:11:38.679437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:17:10.521 [2024-11-03 10:11:38.679444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.681922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.681963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:10.521 [2024-11-03 10:11:38.681981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:17:10.521 [2024-11-03 10:11:38.681989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.682353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.682367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:10.521 [2024-11-03 10:11:38.682376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:10.521 [2024-11-03 10:11:38.682383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.704689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.704739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:10.521 [2024-11-03 10:11:38.704750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.281 ms 00:17:10.521 [2024-11-03 10:11:38.704758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.712717] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:10.521 [2024-11-03 10:11:38.730157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.730201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:10.521 [2024-11-03 10:11:38.730212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.316 ms 00:17:10.521 [2024-11-03 10:11:38.730221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.730328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.730339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:10.521 [2024-11-03 10:11:38.730349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:10.521 [2024-11-03 10:11:38.730362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.730450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.730460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:10.521 [2024-11-03 10:11:38.730469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:10.521 [2024-11-03 10:11:38.730481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.730504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.730514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:10.521 [2024-11-03 10:11:38.730522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:10.521 [2024-11-03 10:11:38.730530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.730568] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:10.521 [2024-11-03 10:11:38.730579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.730592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:10.521 [2024-11-03 10:11:38.730603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:10.521 [2024-11-03 10:11:38.730615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.736202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.736271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:10.521 [2024-11-03 10:11:38.736282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.562 ms 00:17:10.521 [2024-11-03 10:11:38.736290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 [2024-11-03 10:11:38.736384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.521 [2024-11-03 10:11:38.736394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:10.521 [2024-11-03 10:11:38.736406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:10.521 [2024-11-03 10:11:38.736414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.521 
[2024-11-03 10:11:38.737372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.521 [2024-11-03 10:11:38.738638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.754 ms, result 0 00:17:10.521 [2024-11-03 10:11:38.739895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:10.521 [2024-11-03 10:11:38.747277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:11.475  [2024-11-03T10:11:40.783Z] Copying: 16/256 [MB] (16 MBps) [2024-11-03T10:11:42.173Z] Copying: 30/256 [MB] (13 MBps) [2024-11-03T10:11:42.789Z] Copying: 58/256 [MB] (28 MBps) [2024-11-03T10:11:44.177Z] Copying: 74/256 [MB] (15 MBps) [2024-11-03T10:11:45.121Z] Copying: 91/256 [MB] (17 MBps) [2024-11-03T10:11:46.065Z] Copying: 109/256 [MB] (17 MBps) [2024-11-03T10:11:47.009Z] Copying: 123/256 [MB] (14 MBps) [2024-11-03T10:11:47.952Z] Copying: 139/256 [MB] (15 MBps) [2024-11-03T10:11:48.896Z] Copying: 151/256 [MB] (12 MBps) [2024-11-03T10:11:49.838Z] Copying: 167/256 [MB] (16 MBps) [2024-11-03T10:11:50.780Z] Copying: 204/256 [MB] (36 MBps) [2024-11-03T10:11:52.167Z] Copying: 229/256 [MB] (25 MBps) [2024-11-03T10:11:53.111Z] Copying: 245368/262144 [kB] (10160 kBps) [2024-11-03T10:11:53.111Z] Copying: 252/256 [MB] (13 MBps) [2024-11-03T10:11:53.111Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-03 10:11:52.968127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:24.749 [2024-11-03 10:11:52.970050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.970101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.749 [2024-11-03 10:11:52.970116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:24.749 [2024-11-03 10:11:52.970130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.970152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:24.749 [2024-11-03 10:11:52.970842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.970875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.749 [2024-11-03 10:11:52.970888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:17:24.749 [2024-11-03 10:11:52.970899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.973576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.973623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.749 [2024-11-03 10:11:52.973634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:17:24.749 [2024-11-03 10:11:52.973642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.981125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.981175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.749 [2024-11-03 10:11:52.981186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.462 ms 00:17:24.749 [2024-11-03 10:11:52.981194] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.988142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.988343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:24.749 [2024-11-03 10:11:52.988363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:17:24.749 [2024-11-03 10:11:52.988371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.991017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.991059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.749 [2024-11-03 10:11:52.991069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:17:24.749 [2024-11-03 10:11:52.991076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.996554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.996603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.749 [2024-11-03 10:11:52.996621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.433 ms 00:17:24.749 [2024-11-03 10:11:52.996629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.996764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.996775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.749 [2024-11-03 10:11:52.996783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:24.749 [2024-11-03 10:11:52.996791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:52.999609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:52.999656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:24.749 [2024-11-03 10:11:52.999666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:17:24.749 [2024-11-03 10:11:52.999673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:53.001634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:53.001681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:24.749 [2024-11-03 10:11:53.001691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.916 ms 00:17:24.749 [2024-11-03 10:11:53.001697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:53.003309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:53.003350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.749 [2024-11-03 10:11:53.003359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:17:24.749 [2024-11-03 10:11:53.003366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:53.005019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.749 [2024-11-03 10:11:53.005068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.749 [2024-11-03 10:11:53.005077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.579 ms 00:17:24.749 [2024-11-03 10:11:53.005085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.749 [2024-11-03 10:11:53.005126] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.749 [2024-11-03 10:11:53.005150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.749 [2024-11-03 10:11:53.005161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 
[2024-11-03 10:11:53.005358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:24.750 [2024-11-03 10:11:53.005575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:24.750 [2024-11-03 10:11:53.005900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.751 [2024-11-03 10:11:53.005983] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.751 [2024-11-03 10:11:53.005991] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:17:24.751 [2024-11-03 10:11:53.005999] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.751 [2024-11-03 10:11:53.006008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.751 [2024-11-03 10:11:53.006015] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.751 [2024-11-03 10:11:53.006023] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.751 [2024-11-03 10:11:53.006034] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.751 [2024-11-03 10:11:53.006045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:24.751 [2024-11-03 10:11:53.006052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.751 [2024-11-03 10:11:53.006059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.751 [2024-11-03 10:11:53.006065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.751 [2024-11-03 10:11:53.006072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.751 [2024-11-03 10:11:53.006080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.751 [2024-11-03 10:11:53.006091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:17:24.751 [2024-11-03 10:11:53.006104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.008403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.751 [2024-11-03 10:11:53.008434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.751 [2024-11-03 10:11:53.008445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:17:24.751 [2024-11-03 10:11:53.008455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.008579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.751 [2024-11-03 10:11:53.008595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.751 [2024-11-03 10:11:53.008604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:24.751 [2024-11-03 10:11:53.008612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.015614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.015658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.751 [2024-11-03 10:11:53.015669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.015677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.015741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.015756] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.751 [2024-11-03 10:11:53.015764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.015771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.015817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.015831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.751 [2024-11-03 10:11:53.015839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.015846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.015865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.015873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.751 [2024-11-03 10:11:53.015883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.015890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.028622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.028677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.751 [2024-11-03 10:11:53.028688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.028696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.038770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.038974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.751 [2024-11-03 10:11:53.039001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.039009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.039061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.039070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.751 [2024-11-03 10:11:53.039079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.039087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.039124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.039133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.751 [2024-11-03 10:11:53.039141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.039151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.039399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.039435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.751 [2024-11-03 10:11:53.039456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.039484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.039543] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.039566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.751 [2024-11-03 10:11:53.039585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.039686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.039740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.039755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.751 [2024-11-03 10:11:53.039764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.039773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.039817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.751 [2024-11-03 10:11:53.039831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.751 [2024-11-03 10:11:53.039840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.751 [2024-11-03 10:11:53.039850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.751 [2024-11-03 10:11:53.039994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.921 ms, result 0 00:17:25.012 00:17:25.012 00:17:25.012 10:11:53 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85490 00:17:25.012 10:11:53 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85490 00:17:25.012 10:11:53 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85490 ']' 00:17:25.012 10:11:53 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:25.012 10:11:53 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.012 10:11:53 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.012 10:11:53 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.012 10:11:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.012 10:11:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:25.012 [2024-11-03 10:11:53.358606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:25.012 [2024-11-03 10:11:53.358787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85490 ] 00:17:25.272 [2024-11-03 10:11:53.493236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.272 [2024-11-03 10:11:53.543822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.217 10:11:54 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.217 10:11:54 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:26.217 10:11:54 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:26.217 [2024-11-03 10:11:54.416953] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.217 [2024-11-03 10:11:54.417037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.480 [2024-11-03 10:11:54.597592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.597663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.480 [2024-11-03 10:11:54.597682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:26.480 [2024-11-03 10:11:54.597693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.600271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.600321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.480 [2024-11-03 10:11:54.600351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:17:26.480 [2024-11-03 10:11:54.600365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.600474] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.480 [2024-11-03 10:11:54.600741] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.480 [2024-11-03 10:11:54.600762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.600775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.480 [2024-11-03 10:11:54.600792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:17:26.480 [2024-11-03 10:11:54.600802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.602577] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.480 [2024-11-03 10:11:54.606621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.606685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.480 [2024-11-03 10:11:54.606698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.041 ms 00:17:26.480 [2024-11-03 10:11:54.606706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.606810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.606822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.480 [2024-11-03 10:11:54.606840] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:26.480 [2024-11-03 10:11:54.606848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.615257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.615299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.480 [2024-11-03 10:11:54.615312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.355 ms 00:17:26.480 [2024-11-03 10:11:54.615320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.615461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.615476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.480 [2024-11-03 10:11:54.615492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:26.480 [2024-11-03 10:11:54.615499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.615529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.615541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.480 [2024-11-03 10:11:54.615551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:26.480 [2024-11-03 10:11:54.615562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.615587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.480 [2024-11-03 10:11:54.617886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.617930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.480 [2024-11-03 10:11:54.617941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:17:26.480 [2024-11-03 10:11:54.617953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.617998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.480 [2024-11-03 10:11:54.618010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.480 [2024-11-03 10:11:54.618019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:26.480 [2024-11-03 10:11:54.618030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.480 [2024-11-03 10:11:54.618053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.480 [2024-11-03 10:11:54.618077] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.480 [2024-11-03 10:11:54.618120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.480 [2024-11-03 10:11:54.618143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:26.480 [2024-11-03 10:11:54.618270] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.480 [2024-11-03 10:11:54.618286] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.480 [2024-11-03 10:11:54.618302] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.480 [2024-11-03 10:11:54.618317] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618327] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618344] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.481 [2024-11-03 10:11:54.618353] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.481 [2024-11-03 10:11:54.618388] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.481 [2024-11-03 10:11:54.618399] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.481 [2024-11-03 10:11:54.618410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.481 [2024-11-03 10:11:54.618419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.481 [2024-11-03 10:11:54.618431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:17:26.481 [2024-11-03 10:11:54.618438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.481 [2024-11-03 10:11:54.618528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.481 [2024-11-03 10:11:54.618537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.481 [2024-11-03 10:11:54.618546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:26.481 [2024-11-03 10:11:54.618555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.481 [2024-11-03 10:11:54.618660] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.481 [2024-11-03 10:11:54.618670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.481 [2024-11-03 10:11:54.618683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.481 [2024-11-03 10:11:54.618710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.481 [2024-11-03 10:11:54.618737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.481 [2024-11-03 10:11:54.618753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.481 [2024-11-03 10:11:54.618760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.481 [2024-11-03 10:11:54.618769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.481 [2024-11-03 10:11:54.618776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.481 [2024-11-03 10:11:54.618786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.481 [2024-11-03 10:11:54.618792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.481 
[2024-11-03 10:11:54.618800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.481 [2024-11-03 10:11:54.618807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.481 [2024-11-03 10:11:54.618833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.481 [2024-11-03 10:11:54.618860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.481 [2024-11-03 10:11:54.618885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.481 [2024-11-03 10:11:54.618907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.481 [2024-11-03 10:11:54.618922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.481 [2024-11-03 10:11:54.618931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.481 [2024-11-03 10:11:54.618946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.481 [2024-11-03 10:11:54.618952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.481 [2024-11-03 10:11:54.618964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.481 [2024-11-03 10:11:54.618971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.481 [2024-11-03 10:11:54.618979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.481 [2024-11-03 10:11:54.618985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.481 [2024-11-03 10:11:54.618994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.481 [2024-11-03 10:11:54.619001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.481 [2024-11-03 10:11:54.619010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.481 [2024-11-03 10:11:54.619017] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.481 [2024-11-03 10:11:54.619030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.481 [2024-11-03 10:11:54.619041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.481 [2024-11-03 10:11:54.619050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.481 [2024-11-03 10:11:54.619059] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:26.481 [2024-11-03 10:11:54.619068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.481 [2024-11-03 10:11:54.619075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.481 [2024-11-03 10:11:54.619084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.481 [2024-11-03 10:11:54.619091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.481 [2024-11-03 10:11:54.619102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.481 [2024-11-03 10:11:54.619110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.481 [2024-11-03 10:11:54.619123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.481 [2024-11-03 10:11:54.619132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.481 [2024-11-03 10:11:54.619142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.481 [2024-11-03 10:11:54.619150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.481 [2024-11-03 10:11:54.619160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.481 [2024-11-03 10:11:54.619168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.481 [2024-11-03 10:11:54.619178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.481 [2024-11-03 10:11:54.619186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.481 [2024-11-03 10:11:54.619195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.481 [2024-11-03 10:11:54.619203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.481 [2024-11-03 10:11:54.619213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.481 [2024-11-03 10:11:54.619252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.481 [2024-11-03 10:11:54.619263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.481 [2024-11-03 10:11:54.619271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.481 [2024-11-03 10:11:54.619282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.481 [2024-11-03 10:11:54.619296] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.481 [2024-11-03 
10:11:54.619307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.481 [2024-11-03 10:11:54.619315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.481 [2024-11-03 10:11:54.619325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.481 [2024-11-03 10:11:54.619334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.481 [2024-11-03 10:11:54.619343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.481 [2024-11-03 10:11:54.619351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.481 [2024-11-03 10:11:54.619364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.481 [2024-11-03 10:11:54.619374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:17:26.481 [2024-11-03 10:11:54.619383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.481 [2024-11-03 10:11:54.634130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.481 [2024-11-03 10:11:54.634385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.481 [2024-11-03 10:11:54.634407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.660 ms 00:17:26.481 [2024-11-03 10:11:54.634418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.481 [2024-11-03 10:11:54.634556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.481 [2024-11-03 10:11:54.634572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.481 [2024-11-03 10:11:54.634592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:26.482 [2024-11-03 10:11:54.634602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.646342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.646388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.482 [2024-11-03 10:11:54.646404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.716 ms 00:17:26.482 [2024-11-03 10:11:54.646414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.646483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.646498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.482 [2024-11-03 10:11:54.646507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.482 [2024-11-03 10:11:54.646517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.646999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.647035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.482 [2024-11-03 10:11:54.647046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:17:26.482 [2024-11-03 10:11:54.647056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.647202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.647217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.482 [2024-11-03 10:11:54.647247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:26.482 [2024-11-03 10:11:54.647258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.667130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.667209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.482 [2024-11-03 10:11:54.667257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.843 ms 00:17:26.482 [2024-11-03 10:11:54.667274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.671524] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:26.482 [2024-11-03 10:11:54.671583] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.482 [2024-11-03 10:11:54.671597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.671609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.482 [2024-11-03 10:11:54.671619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.141 ms 00:17:26.482 [2024-11-03 10:11:54.671628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.688565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.688628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.482 [2024-11-03 10:11:54.688640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.852 ms 00:17:26.482 [2024-11-03 10:11:54.688652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.691840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.692038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.482 [2024-11-03 10:11:54.692058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.092 ms 00:17:26.482 [2024-11-03 10:11:54.692068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.695218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.695288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.482 [2024-11-03 10:11:54.695298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:17:26.482 [2024-11-03 10:11:54.695307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.695687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.695708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.482 [2024-11-03 10:11:54.695718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:17:26.482 [2024-11-03 10:11:54.695727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 
10:11:54.721067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.721130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.482 [2024-11-03 10:11:54.721144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.317 ms 00:17:26.482 [2024-11-03 10:11:54.721157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.729313] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.482 [2024-11-03 10:11:54.747679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.747729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.482 [2024-11-03 10:11:54.747745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.385 ms 00:17:26.482 [2024-11-03 10:11:54.747754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.747844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.747867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.482 [2024-11-03 10:11:54.747879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:26.482 [2024-11-03 10:11:54.747891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.747949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.747961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.482 [2024-11-03 10:11:54.747975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:26.482 [2024-11-03 10:11:54.747983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.748022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.748031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.482 [2024-11-03 10:11:54.748044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.482 [2024-11-03 10:11:54.748052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.748111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.482 [2024-11-03 10:11:54.748122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.748131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.482 [2024-11-03 10:11:54.748139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:26.482 [2024-11-03 10:11:54.748149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.754082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.754137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.482 [2024-11-03 10:11:54.754149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.909 ms 00:17:26.482 [2024-11-03 10:11:54.754160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.482 [2024-11-03 10:11:54.754274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.482 [2024-11-03 10:11:54.754288] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:26.482 [2024-11-03 10:11:54.754297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:17:26.482 [2024-11-03 10:11:54.754312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.482 [2024-11-03 10:11:54.755314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:26.482 [2024-11-03 10:11:54.756593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.391 ms, result 0
00:17:26.482 [2024-11-03 10:11:54.758770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:26.482 Some configs were skipped because the RPC state that can call them passed over.
00:17:26.482 10:11:54 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:26.744 [2024-11-03 10:11:54.992639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.744 [2024-11-03 10:11:54.992838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:26.744 [2024-11-03 10:11:54.992912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms
00:17:26.744 [2024-11-03 10:11:54.992937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.744 [2024-11-03 10:11:54.993001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.782 ms, result 0
00:17:26.744 true
00:17:26.744 10:11:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:27.006 [2024-11-03 10:11:55.200182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:27.006 [2024-11-03 10:11:55.200398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:27.006 [2024-11-03 10:11:55.200421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms
00:17:27.006 [2024-11-03 10:11:55.200431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.006 [2024-11-03 10:11:55.200477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.021 ms, result 0
00:17:27.006 true
00:17:27.006 10:11:55 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85490
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85490 ']'
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85490
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85490
killing process with pid 85490
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85490'
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85490
00:17:27.006 10:11:55 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85490
00:17:27.268 [2024-11-03 10:11:55.367630]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.367686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:27.268 [2024-11-03 10:11:55.367700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:27.268 [2024-11-03 10:11:55.367709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.367735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:27.268 [2024-11-03 10:11:55.368264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.368286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:27.268 [2024-11-03 10:11:55.368296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:17:27.268 [2024-11-03 10:11:55.368306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.368589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.368602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:27.268 [2024-11-03 10:11:55.368611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:17:27.268 [2024-11-03 10:11:55.368620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.373106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.373142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:27.268 [2024-11-03 10:11:55.373152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.467 ms 00:17:27.268 [2024-11-03 10:11:55.373165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.380514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.380555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:27.268 [2024-11-03 10:11:55.380564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.315 ms 00:17:27.268 [2024-11-03 10:11:55.380575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.382772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.382811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:27.268 [2024-11-03 10:11:55.382821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:17:27.268 [2024-11-03 10:11:55.382829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.387348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.387488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:27.268 [2024-11-03 10:11:55.387505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.481 ms 00:17:27.268 [2024-11-03 10:11:55.387518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.387648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.387659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:27.268 [2024-11-03 10:11:55.387668] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:27.268 [2024-11-03 10:11:55.387677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.390485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.390607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:27.268 [2024-11-03 10:11:55.390622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.790 ms 00:17:27.268 [2024-11-03 10:11:55.390636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.393152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.393192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:27.268 [2024-11-03 10:11:55.393202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.467 ms 00:17:27.268 [2024-11-03 10:11:55.393210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.395203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.395253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:27.268 [2024-11-03 10:11:55.395263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.940 ms 00:17:27.268 [2024-11-03 10:11:55.395271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.397284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.268 [2024-11-03 10:11:55.397405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:27.268 [2024-11-03 10:11:55.397419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:17:27.268 [2024-11-03 10:11:55.397427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.268 [2024-11-03 10:11:55.397461] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:27.268 [2024-11-03 10:11:55.397477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:27.268 [2024-11-03 10:11:55.397486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:27.268 [2024-11-03 10:11:55.397498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:27.268 [2024-11-03 10:11:55.397505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:27.268 [2024-11-03 10:11:55.397515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:27.268 [2024-11-03 10:11:55.397522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:27.268 [2024-11-03 10:11:55.397531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397563] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 
[2024-11-03 10:11:55.397777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:27.269 [2024-11-03 10:11:55.397985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.397992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:27.269 [2024-11-03 10:11:55.398204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:27.270 [2024-11-03 10:11:55.398346] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:27.270 [2024-11-03 10:11:55.398365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:17:27.270 [2024-11-03 10:11:55.398375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:27.270 [2024-11-03 10:11:55.398383] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:27.270 [2024-11-03 10:11:55.398392] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:27.270 [2024-11-03 10:11:55.398402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:27.270 [2024-11-03 10:11:55.398411] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:27.270 [2024-11-03 10:11:55.398418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:27.270 [2024-11-03 10:11:55.398427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:27.270 [2024-11-03 10:11:55.398434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:27.270 [2024-11-03 10:11:55.398442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:27.270 [2024-11-03 10:11:55.398449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
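A note on the dump above: total writes stands at 960 while user writes is still 0, so if WAF here is read as the usual write-amplification ratio of device writes to user writes, 960 / 0 has no finite value and is printed as inf; that reading is an interpretation, not something this log states. Condensed from the commands that appear verbatim elsewhere in this transcript, the shell-visible steps of this ftl_trim pass amount to the minimal sketch below (the bdev name ftl0, the LBAs, and all paths are exactly as logged; only the comments are added, as assumptions):

  # trim two 1024-block ranges: one at LBA 0 and one at the tail of the
  # 23592960-entry L2P address space (23591936 + 1024 = 23592960)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # after the 'FTL shutdown' management process, bring the device back up
  # through spdk_dd and read 65536 blocks of ftl0 out to a data file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json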
00:17:27.270 [2024-11-03 10:11:55.398461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:27.270 [2024-11-03 10:11:55.398474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:17:27.270 [2024-11-03 10:11:55.398484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 10:11:55.400240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.270 [2024-11-03 10:11:55.400266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:27.270 [2024-11-03 10:11:55.400275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:17:27.270 [2024-11-03 10:11:55.400288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 10:11:55.400382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.270 [2024-11-03 10:11:55.400393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:27.270 [2024-11-03 10:11:55.400401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:27.270 [2024-11-03 10:11:55.400410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 10:11:55.406586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.270 [2024-11-03 10:11:55.406633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.270 [2024-11-03 10:11:55.406643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.270 [2024-11-03 10:11:55.406652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 10:11:55.406732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.270 [2024-11-03 10:11:55.406744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.270 [2024-11-03 10:11:55.406753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.270 [2024-11-03 10:11:55.406765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 10:11:55.406809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.270 [2024-11-03 10:11:55.406820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.270 [2024-11-03 10:11:55.406830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.270 [2024-11-03 10:11:55.406839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 10:11:55.406858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.270 [2024-11-03 10:11:55.406868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.270 [2024-11-03 10:11:55.406875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.270 [2024-11-03 10:11:55.406885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 10:11:55.417879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.270 [2024-11-03 10:11:55.417931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.270 [2024-11-03 10:11:55.417942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.270 [2024-11-03 10:11:55.417951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.270 [2024-11-03 
10:11:55.426368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:27.270 [2024-11-03 10:11:55.426412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:27.270 [2024-11-03 10:11:55.426423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:27.270 [2024-11-03 10:11:55.426434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.270 [2024-11-03 10:11:55.426478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:27.270 [2024-11-03 10:11:55.426495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:27.270 [2024-11-03 10:11:55.426503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:27.270 [2024-11-03 10:11:55.426514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.270 [2024-11-03 10:11:55.426550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:27.270 [2024-11-03 10:11:55.426560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:27.270 [2024-11-03 10:11:55.426568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:27.270 [2024-11-03 10:11:55.426581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.270 [2024-11-03 10:11:55.426651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:27.270 [2024-11-03 10:11:55.426662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:27.270 [2024-11-03 10:11:55.426671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:27.270 [2024-11-03 10:11:55.426679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.270 [2024-11-03 10:11:55.426711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:27.270 [2024-11-03 10:11:55.426723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:27.270 [2024-11-03 10:11:55.426731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:27.270 [2024-11-03 10:11:55.426742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.270 [2024-11-03 10:11:55.426782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:27.270 [2024-11-03 10:11:55.426793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:27.270 [2024-11-03 10:11:55.426804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:27.270 [2024-11-03 10:11:55.426813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.270 [2024-11-03 10:11:55.426859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:27.270 [2024-11-03 10:11:55.426871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:27.270 [2024-11-03 10:11:55.426879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:27.270 [2024-11-03 10:11:55.426887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:27.270 [2024-11-03 10:11:55.427020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.366 ms, result 0
00:17:27.531 10:11:55 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:27.531 10:11:55 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:27.531 [2024-11-03 10:11:55.702916] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:17:27.531 [2024-11-03 10:11:55.703049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85532 ]
00:17:27.532 [2024-11-03 10:11:55.837774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:27.532 [2024-11-03 10:11:55.891666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:27.793 [2024-11-03 10:11:56.002256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:27.793 [2024-11-03 10:11:56.002329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:28.055 [2024-11-03 10:11:56.163161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.055 [2024-11-03 10:11:56.163221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:28.055 [2024-11-03 10:11:56.163252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:17:28.055 [2024-11-03 10:11:56.163260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.055 [2024-11-03 10:11:56.166034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.055 [2024-11-03 10:11:56.166089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:28.055 [2024-11-03 10:11:56.166103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms
00:17:28.055 [2024-11-03 10:11:56.166115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.055 [2024-11-03 10:11:56.166253] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:28.055 [2024-11-03 10:11:56.166523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:28.055 [2024-11-03 10:11:56.166541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.055 [2024-11-03 10:11:56.166550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:28.055 [2024-11-03 10:11:56.166562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms
00:17:28.055 [2024-11-03 10:11:56.166573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.055 [2024-11-03 10:11:56.168362] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:28.055 [2024-11-03 10:11:56.172314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.055 [2024-11-03 10:11:56.172361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:28.055 [2024-11-03 10:11:56.172378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.954 ms
00:17:28.055 [2024-11-03 10:11:56.172389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.055 [2024-11-03 10:11:56.172472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.055 [2024-11-03 10:11:56.172483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:28.055 [2024-11-03 10:11:56.172492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.025 ms 00:17:28.055 [2024-11-03 10:11:56.172505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.055 [2024-11-03 10:11:56.180804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.055 [2024-11-03 10:11:56.180844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.055 [2024-11-03 10:11:56.180860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.255 ms 00:17:28.055 [2024-11-03 10:11:56.180871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.055 [2024-11-03 10:11:56.181012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.055 [2024-11-03 10:11:56.181023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.055 [2024-11-03 10:11:56.181033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:28.055 [2024-11-03 10:11:56.181041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.055 [2024-11-03 10:11:56.181067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.055 [2024-11-03 10:11:56.181076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.055 [2024-11-03 10:11:56.181087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:28.056 [2024-11-03 10:11:56.181100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.056 [2024-11-03 10:11:56.181120] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:28.056 [2024-11-03 10:11:56.183178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.056 [2024-11-03 10:11:56.183200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.056 [2024-11-03 10:11:56.183210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:17:28.056 [2024-11-03 10:11:56.183217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.056 [2024-11-03 10:11:56.183281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.056 [2024-11-03 10:11:56.183294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.056 [2024-11-03 10:11:56.183306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:28.056 [2024-11-03 10:11:56.183314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.056 [2024-11-03 10:11:56.183332] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:28.056 [2024-11-03 10:11:56.183352] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:28.056 [2024-11-03 10:11:56.183394] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:28.056 [2024-11-03 10:11:56.183410] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:28.056 [2024-11-03 10:11:56.183519] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:28.056 [2024-11-03 10:11:56.183529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.056 [2024-11-03 10:11:56.183540] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:28.056 [2024-11-03 10:11:56.183550] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.056 [2024-11-03 10:11:56.183561] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.056 [2024-11-03 10:11:56.183569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:28.056 [2024-11-03 10:11:56.183577] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.056 [2024-11-03 10:11:56.183588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:28.056 [2024-11-03 10:11:56.183596] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:28.056 [2024-11-03 10:11:56.183604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.056 [2024-11-03 10:11:56.183614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.056 [2024-11-03 10:11:56.183625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:28.056 [2024-11-03 10:11:56.183632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.056 [2024-11-03 10:11:56.183720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.056 [2024-11-03 10:11:56.183730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.056 [2024-11-03 10:11:56.183739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:28.056 [2024-11-03 10:11:56.183746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.056 [2024-11-03 10:11:56.183845] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.056 [2024-11-03 10:11:56.183856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.056 [2024-11-03 10:11:56.183866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.056 [2024-11-03 10:11:56.183878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.056 [2024-11-03 10:11:56.183887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.056 [2024-11-03 10:11:56.183895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.056 [2024-11-03 10:11:56.183903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:28.056 [2024-11-03 10:11:56.183911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.056 [2024-11-03 10:11:56.183922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:28.056 [2024-11-03 10:11:56.183930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.056 [2024-11-03 10:11:56.183937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.056 [2024-11-03 10:11:56.183945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:28.056 [2024-11-03 10:11:56.183952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.056 [2024-11-03 10:11:56.183960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.056 [2024-11-03 10:11:56.183968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:28.056 [2024-11-03 10:11:56.183975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.056 [2024-11-03 10:11:56.183983] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.056 [2024-11-03 10:11:56.183991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:28.056 [2024-11-03 10:11:56.183998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.056 [2024-11-03 10:11:56.184016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.056 [2024-11-03 10:11:56.184031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.056 [2024-11-03 10:11:56.184040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.056 [2024-11-03 10:11:56.184060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.056 [2024-11-03 10:11:56.184068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.056 [2024-11-03 10:11:56.184100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.056 [2024-11-03 10:11:56.184108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.056 [2024-11-03 10:11:56.184124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.056 [2024-11-03 10:11:56.184131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.056 [2024-11-03 10:11:56.184148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.056 [2024-11-03 10:11:56.184156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:28.056 [2024-11-03 10:11:56.184163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.056 [2024-11-03 10:11:56.184171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:28.056 [2024-11-03 10:11:56.184180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:28.056 [2024-11-03 10:11:56.184187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:28.056 [2024-11-03 10:11:56.184207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:28.056 [2024-11-03 10:11:56.184215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184222] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.056 [2024-11-03 10:11:56.184247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.056 [2024-11-03 10:11:56.184263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.056 [2024-11-03 10:11:56.184274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.056 [2024-11-03 10:11:56.184283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.056 
[2024-11-03 10:11:56.184291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.056 [2024-11-03 10:11:56.184298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.056 [2024-11-03 10:11:56.184305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.056 [2024-11-03 10:11:56.184312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.056 [2024-11-03 10:11:56.184319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.056 [2024-11-03 10:11:56.184328] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.056 [2024-11-03 10:11:56.184337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.056 [2024-11-03 10:11:56.184350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:28.056 [2024-11-03 10:11:56.184360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:28.056 [2024-11-03 10:11:56.184368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:28.056 [2024-11-03 10:11:56.184376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:28.056 [2024-11-03 10:11:56.184383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:28.056 [2024-11-03 10:11:56.184390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:28.056 [2024-11-03 10:11:56.184397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:28.056 [2024-11-03 10:11:56.184409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:28.056 [2024-11-03 10:11:56.184416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:28.056 [2024-11-03 10:11:56.184423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:28.056 [2024-11-03 10:11:56.184430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:28.056 [2024-11-03 10:11:56.184437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:28.056 [2024-11-03 10:11:56.184444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:28.056 [2024-11-03 10:11:56.184451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:28.057 [2024-11-03 10:11:56.184458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.057 [2024-11-03 10:11:56.184470] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.057 [2024-11-03 10:11:56.184478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.057 [2024-11-03 10:11:56.184488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.057 [2024-11-03 10:11:56.184495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.057 [2024-11-03 10:11:56.184502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.057 [2024-11-03 10:11:56.184510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.184518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.057 [2024-11-03 10:11:56.184528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:17:28.057 [2024-11-03 10:11:56.184535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.209164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.209218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.057 [2024-11-03 10:11:56.209249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.573 ms 00:17:28.057 [2024-11-03 10:11:56.209266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.209427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.209440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.057 [2024-11-03 10:11:56.209450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:28.057 [2024-11-03 10:11:56.209466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.221299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.221337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.057 [2024-11-03 10:11:56.221347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.809 ms 00:17:28.057 [2024-11-03 10:11:56.221356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.221427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.221437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.057 [2024-11-03 10:11:56.221452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.057 [2024-11-03 10:11:56.221463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.221991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.222023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.057 [2024-11-03 10:11:56.222035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:17:28.057 [2024-11-03 10:11:56.222044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 
10:11:56.222205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.222216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.057 [2024-11-03 10:11:56.222256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:17:28.057 [2024-11-03 10:11:56.222273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.229326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.229362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.057 [2024-11-03 10:11:56.229371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.027 ms 00:17:28.057 [2024-11-03 10:11:56.229379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.233087] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:28.057 [2024-11-03 10:11:56.233135] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:28.057 [2024-11-03 10:11:56.233153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.233161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:28.057 [2024-11-03 10:11:56.233170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:17:28.057 [2024-11-03 10:11:56.233177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.249060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.249100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:28.057 [2024-11-03 10:11:56.249110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.806 ms 00:17:28.057 [2024-11-03 10:11:56.249119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.251891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.251935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:28.057 [2024-11-03 10:11:56.251946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:17:28.057 [2024-11-03 10:11:56.251953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.254575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.254614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:28.057 [2024-11-03 10:11:56.254633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:17:28.057 [2024-11-03 10:11:56.254641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.255009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.255022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.057 [2024-11-03 10:11:56.255033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:28.057 [2024-11-03 10:11:56.255041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.279119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.279175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:28.057 [2024-11-03 10:11:56.279188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.055 ms 00:17:28.057 [2024-11-03 10:11:56.279200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.287464] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:28.057 [2024-11-03 10:11:56.305969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.306012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.057 [2024-11-03 10:11:56.306025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.663 ms 00:17:28.057 [2024-11-03 10:11:56.306034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.306132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.306147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:28.057 [2024-11-03 10:11:56.306157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:28.057 [2024-11-03 10:11:56.306175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.306264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.306276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.057 [2024-11-03 10:11:56.306285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:28.057 [2024-11-03 10:11:56.306293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.306320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.306330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.057 [2024-11-03 10:11:56.306338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:28.057 [2024-11-03 10:11:56.306346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.306380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:28.057 [2024-11-03 10:11:56.306390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.306401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:28.057 [2024-11-03 10:11:56.306416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:28.057 [2024-11-03 10:11:56.306427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.312184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.312241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.057 [2024-11-03 10:11:56.312252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.734 ms 00:17:28.057 [2024-11-03 10:11:56.312260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.312353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.057 [2024-11-03 10:11:56.312366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:28.057 [2024-11-03 10:11:56.312376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:28.057 [2024-11-03 10:11:56.312383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.057 [2024-11-03 10:11:56.313486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.057 [2024-11-03 10:11:56.314762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.008 ms, result 0 00:17:28.057 [2024-11-03 10:11:56.315888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.057 [2024-11-03 10:11:56.323408] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.000  [2024-11-03T10:11:58.751Z] Copying: 14/256 [MB] (14 MBps) [2024-11-03T10:11:59.358Z] Copying: 30/256 [MB] (16 MBps) [2024-11-03T10:12:00.744Z] Copying: 47/256 [MB] (17 MBps) [2024-11-03T10:12:01.687Z] Copying: 58/256 [MB] (10 MBps) [2024-11-03T10:12:02.629Z] Copying: 71/256 [MB] (13 MBps) [2024-11-03T10:12:03.572Z] Copying: 82/256 [MB] (11 MBps) [2024-11-03T10:12:04.515Z] Copying: 92/256 [MB] (10 MBps) [2024-11-03T10:12:05.459Z] Copying: 105/256 [MB] (13 MBps) [2024-11-03T10:12:06.404Z] Copying: 116/256 [MB] (10 MBps) [2024-11-03T10:12:07.347Z] Copying: 137/256 [MB] (21 MBps) [2024-11-03T10:12:08.736Z] Copying: 150/256 [MB] (13 MBps) [2024-11-03T10:12:09.386Z] Copying: 172/256 [MB] (21 MBps) [2024-11-03T10:12:10.330Z] Copying: 188/256 [MB] (16 MBps) [2024-11-03T10:12:11.716Z] Copying: 205/256 [MB] (16 MBps) [2024-11-03T10:12:12.657Z] Copying: 220/256 [MB] (15 MBps) [2024-11-03T10:12:13.604Z] Copying: 235/256 [MB] (15 MBps) [2024-11-03T10:12:13.604Z] Copying: 253/256 [MB] (17 MBps) [2024-11-03T10:12:13.604Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-03 10:12:13.423249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:45.242 [2024-11-03 10:12:13.425100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.425160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:45.242 [2024-11-03 10:12:13.425175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:45.242 [2024-11-03 10:12:13.425188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.425211] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:45.242 [2024-11-03 10:12:13.425880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.425923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:45.242 [2024-11-03 10:12:13.425938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:17:45.242 [2024-11-03 10:12:13.425947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.426210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.426222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:45.242 [2024-11-03 10:12:13.426250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:17:45.242 [2024-11-03 10:12:13.426262] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.429956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.429982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:45.242 [2024-11-03 10:12:13.429992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.677 ms 00:17:45.242 [2024-11-03 10:12:13.430000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.437021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.437069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.242 [2024-11-03 10:12:13.437088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:17:45.242 [2024-11-03 10:12:13.437096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.439519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.439564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.242 [2024-11-03 10:12:13.439574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.362 ms 00:17:45.242 [2024-11-03 10:12:13.439582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.444203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.444265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.242 [2024-11-03 10:12:13.444283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:17:45.242 [2024-11-03 10:12:13.444291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.444425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.444436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.242 [2024-11-03 10:12:13.444444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:45.242 [2024-11-03 10:12:13.444452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.447773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.447820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.242 [2024-11-03 10:12:13.447830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:17:45.242 [2024-11-03 10:12:13.447837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.450518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.450561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.242 [2024-11-03 10:12:13.450570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:17:45.242 [2024-11-03 10:12:13.450578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.453067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.453117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.242 [2024-11-03 10:12:13.453127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:17:45.242 
[2024-11-03 10:12:13.453136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.455670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.242 [2024-11-03 10:12:13.455717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.242 [2024-11-03 10:12:13.455726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.457 ms 00:17:45.242 [2024-11-03 10:12:13.455732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.242 [2024-11-03 10:12:13.455774] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.242 [2024-11-03 10:12:13.455797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.242 [2024-11-03 10:12:13.455869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455950] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.455995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 
[2024-11-03 10:12:13.456174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:17:45.243 [2024-11-03 10:12:13.456378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:45.243 [2024-11-03 10:12:13.456596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:45.244 [2024-11-03 10:12:13.456612] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:45.244 [2024-11-03 10:12:13.456620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:17:45.244 [2024-11-03 10:12:13.456628] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:45.244 [2024-11-03 10:12:13.456636] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:45.244 [2024-11-03 10:12:13.456643] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:45.244 [2024-11-03 10:12:13.456651] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:45.244 [2024-11-03 10:12:13.456658] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:45.244 [2024-11-03 10:12:13.456666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:45.244 [2024-11-03 10:12:13.456673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:45.244 [2024-11-03 10:12:13.456679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:45.244 [2024-11-03 10:12:13.456686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:45.244 [2024-11-03 10:12:13.456693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.244 [2024-11-03 10:12:13.456701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:45.244 [2024-11-03 10:12:13.456712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:17:45.244 [2024-11-03 10:12:13.456720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.458969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.244 [2024-11-03 10:12:13.459005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:45.244 [2024-11-03 10:12:13.459028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.211 ms 00:17:45.244 [2024-11-03 10:12:13.459037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.459164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.244 [2024-11-03 10:12:13.459180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:45.244 [2024-11-03 10:12:13.459190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:45.244 [2024-11-03 10:12:13.459200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.466114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.466161] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.244 [2024-11-03 10:12:13.466171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.466179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.466283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.466296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.244 [2024-11-03 10:12:13.466305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.466312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.466358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.466373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.244 [2024-11-03 10:12:13.466382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.466389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.466407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.466416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.244 [2024-11-03 10:12:13.466426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.466433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.479497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.479552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.244 [2024-11-03 10:12:13.479563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.479571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.489323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.489374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.244 [2024-11-03 10:12:13.489384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.489393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.489424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.489432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.244 [2024-11-03 10:12:13.489441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.489449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.489481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.244 [2024-11-03 10:12:13.489490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.244 [2024-11-03 10:12:13.489498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.244 [2024-11-03 10:12:13.489509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.244 [2024-11-03 10:12:13.489584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:45.244 [2024-11-03 10:12:13.489595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:45.244 [2024-11-03 10:12:13.489603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.244 [2024-11-03 10:12:13.489611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.244 [2024-11-03 10:12:13.489659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.244 [2024-11-03 10:12:13.489669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:45.244 [2024-11-03 10:12:13.489677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.244 [2024-11-03 10:12:13.489686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.244 [2024-11-03 10:12:13.489737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.244 [2024-11-03 10:12:13.489747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:45.244 [2024-11-03 10:12:13.489755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.244 [2024-11-03 10:12:13.489763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.244 [2024-11-03 10:12:13.489808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.244 [2024-11-03 10:12:13.489819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:45.244 [2024-11-03 10:12:13.489827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.244 [2024-11-03 10:12:13.489838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.244 [2024-11-03 10:12:13.489985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.857 ms, result 0
00:17:45.518
00:17:45.518
00:17:45.518 10:12:13 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:17:45.518 10:12:13 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:46.094 10:12:14 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-03 10:12:14.360284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
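The trim.sh steps above carry the actual data-integrity check of this test: cmp --bytes=4194304 confirms that the first 4 MiB (4194304 bytes) of the data file read back as zeroes after the trim, md5sum fingerprints the file for a later comparison, and spdk_dd copies 1024 blocks of the random-pattern file onto the ftl0 bdev for the next pass. A minimal standalone sketch of the same zero-check, assuming the paths shown in the log; the echo label is new:

#!/usr/bin/env bash
# Mirror of trim.sh@86-87 above: verify the trimmed range reads back as
# zeroes, then fingerprint the whole file (paths assumed from the log).
data=/home/vagrant/spdk_repo/spdk/test/ftl/data
if cmp --bytes=4194304 "$data" /dev/zero; then
    echo "first 4 MiB are all zeroes (trim verified)"
fi
md5sum "$data"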
00:17:46.094 [2024-11-03 10:12:14.360425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85738 ] 00:17:46.356 [2024-11-03 10:12:14.497341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.356 [2024-11-03 10:12:14.531347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.356 [2024-11-03 10:12:14.625175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.356 [2024-11-03 10:12:14.625263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.620 [2024-11-03 10:12:14.785713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.785771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:46.620 [2024-11-03 10:12:14.785786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:46.620 [2024-11-03 10:12:14.785795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.788338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.788387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.620 [2024-11-03 10:12:14.788405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:17:46.620 [2024-11-03 10:12:14.788413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.788517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:46.620 [2024-11-03 10:12:14.788894] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:46.620 [2024-11-03 10:12:14.788943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.788953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.620 [2024-11-03 10:12:14.788967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:17:46.620 [2024-11-03 10:12:14.788975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.790930] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.620 [2024-11-03 10:12:14.794676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.794724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.620 [2024-11-03 10:12:14.794747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.749 ms 00:17:46.620 [2024-11-03 10:12:14.794759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.794843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.794855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.620 [2024-11-03 10:12:14.794864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:46.620 [2024-11-03 10:12:14.794872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.803137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:46.620 [2024-11-03 10:12:14.803187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.620 [2024-11-03 10:12:14.803197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.215 ms 00:17:46.620 [2024-11-03 10:12:14.803205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.803367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.803381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.620 [2024-11-03 10:12:14.803390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:46.620 [2024-11-03 10:12:14.803399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.803428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.803437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.620 [2024-11-03 10:12:14.803452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:46.620 [2024-11-03 10:12:14.803460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.803487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.620 [2024-11-03 10:12:14.805470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.805509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.620 [2024-11-03 10:12:14.805519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:17:46.620 [2024-11-03 10:12:14.805526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.805574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.805587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.620 [2024-11-03 10:12:14.805599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:46.620 [2024-11-03 10:12:14.805607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.805627] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.620 [2024-11-03 10:12:14.805648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:46.620 [2024-11-03 10:12:14.805690] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:46.620 [2024-11-03 10:12:14.805713] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:46.620 [2024-11-03 10:12:14.805822] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:46.620 [2024-11-03 10:12:14.805833] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.620 [2024-11-03 10:12:14.805848] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:46.620 [2024-11-03 10:12:14.805859] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.620 [2024-11-03 10:12:14.805868] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.620 [2024-11-03 10:12:14.805881] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.620 [2024-11-03 10:12:14.805888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.620 [2024-11-03 10:12:14.805897] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:46.620 [2024-11-03 10:12:14.805905] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:46.620 [2024-11-03 10:12:14.805913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.805924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.620 [2024-11-03 10:12:14.805935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:46.620 [2024-11-03 10:12:14.805942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.806028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.620 [2024-11-03 10:12:14.806037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.620 [2024-11-03 10:12:14.806045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:46.620 [2024-11-03 10:12:14.806052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.620 [2024-11-03 10:12:14.806152] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.620 [2024-11-03 10:12:14.806163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.620 [2024-11-03 10:12:14.806172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.620 [2024-11-03 10:12:14.806184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.620 [2024-11-03 10:12:14.806201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.620 [2024-11-03 10:12:14.806216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.620 [2024-11-03 10:12:14.806244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.620 [2024-11-03 10:12:14.806260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.620 [2024-11-03 10:12:14.806267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.620 [2024-11-03 10:12:14.806275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.620 [2024-11-03 10:12:14.806283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.620 [2024-11-03 10:12:14.806291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:46.620 [2024-11-03 10:12:14.806298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.620 [2024-11-03 10:12:14.806314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:46.620 [2024-11-03 10:12:14.806322] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.620 [2024-11-03 10:12:14.806338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.620 [2024-11-03 10:12:14.806354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.620 [2024-11-03 10:12:14.806361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.620 [2024-11-03 10:12:14.806385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.620 [2024-11-03 10:12:14.806394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.620 [2024-11-03 10:12:14.806403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.621 [2024-11-03 10:12:14.806411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.621 [2024-11-03 10:12:14.806422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:46.621 [2024-11-03 10:12:14.806430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.621 [2024-11-03 10:12:14.806438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.621 [2024-11-03 10:12:14.806445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:46.621 [2024-11-03 10:12:14.806453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.621 [2024-11-03 10:12:14.806461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.621 [2024-11-03 10:12:14.806469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:46.621 [2024-11-03 10:12:14.806477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.621 [2024-11-03 10:12:14.806485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:46.621 [2024-11-03 10:12:14.806492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:46.621 [2024-11-03 10:12:14.806500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.621 [2024-11-03 10:12:14.806510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:46.621 [2024-11-03 10:12:14.806519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:46.621 [2024-11-03 10:12:14.806527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.621 [2024-11-03 10:12:14.806535] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.621 [2024-11-03 10:12:14.806544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.621 [2024-11-03 10:12:14.806556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.621 [2024-11-03 10:12:14.806564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.621 [2024-11-03 10:12:14.806572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:46.621 [2024-11-03 10:12:14.806579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.621 [2024-11-03 10:12:14.806585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.621 
[2024-11-03 10:12:14.806592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:46.621 [2024-11-03 10:12:14.806599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.621 [2024-11-03 10:12:14.806606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.621 [2024-11-03 10:12:14.806614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.621 [2024-11-03 10:12:14.806623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.621 [2024-11-03 10:12:14.806633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.621 [2024-11-03 10:12:14.806643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:46.621 [2024-11-03 10:12:14.806650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.621 [2024-11-03 10:12:14.806657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:46.621 [2024-11-03 10:12:14.806665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:46.621 [2024-11-03 10:12:14.806673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:46.621 [2024-11-03 10:12:14.806682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:46.621 [2024-11-03 10:12:14.806695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:46.621 [2024-11-03 10:12:14.806704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:46.621 [2024-11-03 10:12:14.806711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:46.621 [2024-11-03 10:12:14.806718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:46.621 [2024-11-03 10:12:14.806726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:46.621 [2024-11-03 10:12:14.806734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:46.621 [2024-11-03 10:12:14.806742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:46.621 [2024-11-03 10:12:14.806749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.621 [2024-11-03 10:12:14.806758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.621 [2024-11-03 10:12:14.806767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:46.621 [2024-11-03 10:12:14.806777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.621 [2024-11-03 10:12:14.806784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.621 [2024-11-03 10:12:14.806792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.621 [2024-11-03 10:12:14.806799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.806807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.621 [2024-11-03 10:12:14.806817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:17:46.621 [2024-11-03 10:12:14.806824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.827415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.827463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.621 [2024-11-03 10:12:14.827478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.534 ms 00:17:46.621 [2024-11-03 10:12:14.827488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.827653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.827669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.621 [2024-11-03 10:12:14.827680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:46.621 [2024-11-03 10:12:14.827693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.836499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.836537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.621 [2024-11-03 10:12:14.836548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.780 ms 00:17:46.621 [2024-11-03 10:12:14.836563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.836607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.836618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.621 [2024-11-03 10:12:14.836629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:46.621 [2024-11-03 10:12:14.836636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.836952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.836976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.621 [2024-11-03 10:12:14.836985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:46.621 [2024-11-03 10:12:14.836992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.837123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.837143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.621 [2024-11-03 10:12:14.837155] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:46.621 [2024-11-03 10:12:14.837165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.841861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.841890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.621 [2024-11-03 10:12:14.841899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.675 ms 00:17:46.621 [2024-11-03 10:12:14.841912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.844207] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:46.621 [2024-11-03 10:12:14.844257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.621 [2024-11-03 10:12:14.844268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.844276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.621 [2024-11-03 10:12:14.844284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:17:46.621 [2024-11-03 10:12:14.844291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.858818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.858852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.621 [2024-11-03 10:12:14.858863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.485 ms 00:17:46.621 [2024-11-03 10:12:14.858870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.860820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.860850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.621 [2024-11-03 10:12:14.860858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.882 ms 00:17:46.621 [2024-11-03 10:12:14.860865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.621 [2024-11-03 10:12:14.862832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.621 [2024-11-03 10:12:14.862862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.622 [2024-11-03 10:12:14.862870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.931 ms 00:17:46.622 [2024-11-03 10:12:14.862882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.863206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.863218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.622 [2024-11-03 10:12:14.863239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:17:46.622 [2024-11-03 10:12:14.863248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.879547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.879597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:46.622 [2024-11-03 10:12:14.879607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.277 ms 00:17:46.622 [2024-11-03 10:12:14.879615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.887077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.622 [2024-11-03 10:12:14.901462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.901500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.622 [2024-11-03 10:12:14.901510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.767 ms 00:17:46.622 [2024-11-03 10:12:14.901518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.901591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.901601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:46.622 [2024-11-03 10:12:14.901610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:46.622 [2024-11-03 10:12:14.901625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.901677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.901686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.622 [2024-11-03 10:12:14.901694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:46.622 [2024-11-03 10:12:14.901701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.901723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.901731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.622 [2024-11-03 10:12:14.901739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:46.622 [2024-11-03 10:12:14.901746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.901778] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:46.622 [2024-11-03 10:12:14.901788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.901797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:46.622 [2024-11-03 10:12:14.901809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:46.622 [2024-11-03 10:12:14.901816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.905650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.905686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.622 [2024-11-03 10:12:14.905696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.814 ms 00:17:46.622 [2024-11-03 10:12:14.905704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 [2024-11-03 10:12:14.905795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.622 [2024-11-03 10:12:14.905808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.622 [2024-11-03 10:12:14.905816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:46.622 [2024-11-03 10:12:14.905823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.622 
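Each FTL management step above is traced by mngt/ftl_mngt.c:trace_step as an Action/name/duration/status quartet, and finish_msg then reports a per-process total. Assuming the console output is saved to a file (ftl0.log is a stand-in name, not produced by the test), a one-liner such as:

  grep -o 'duration: [0-9.]* ms' ftl0.log | awk '{s += $2} END {printf "sum of traced steps: %.3f ms\n", s}'

sums the per-step durations; expect it to land somewhat under the totals finish_msg prints, since untraced work between steps counts toward those as well.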
[2024-11-03 10:12:14.906625] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.622 [2024-11-03 10:12:14.907638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.661 ms, result 0 00:17:46.622 [2024-11-03 10:12:14.908428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.622 [2024-11-03 10:12:14.918205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.888  [2024-11-03T10:12:15.250Z] Copying: 4096/4096 [kB] (average 29 MBps)[2024-11-03 10:12:15.055773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.888 [2024-11-03 10:12:15.056434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.888 [2024-11-03 10:12:15.056467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:46.888 [2024-11-03 10:12:15.056480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:46.888 [2024-11-03 10:12:15.056488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.888 [2024-11-03 10:12:15.056508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:46.888 [2024-11-03 10:12:15.056916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.888 [2024-11-03 10:12:15.056941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:46.888 [2024-11-03 10:12:15.056950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:17:46.888 [2024-11-03 10:12:15.056957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.888 [2024-11-03 10:12:15.058457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.888 [2024-11-03 10:12:15.058486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:46.888 [2024-11-03 10:12:15.058495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:17:46.888 [2024-11-03 10:12:15.058502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.888 [2024-11-03 10:12:15.062799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.888 [2024-11-03 10:12:15.062826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:46.888 [2024-11-03 10:12:15.062835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.276 ms 00:17:46.888 [2024-11-03 10:12:15.062842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.888 [2024-11-03 10:12:15.069721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.888 [2024-11-03 10:12:15.069750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:46.888 [2024-11-03 10:12:15.069760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:17:46.888 [2024-11-03 10:12:15.069768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.888 [2024-11-03 10:12:15.071856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.888 [2024-11-03 10:12:15.071886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:46.888 [2024-11-03 10:12:15.071895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.054 ms 00:17:46.888 [2024-11-03 10:12:15.071901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.888 [2024-11-03 10:12:15.075602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.888 [2024-11-03 10:12:15.075633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:46.889 [2024-11-03 10:12:15.075647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:17:46.889 [2024-11-03 10:12:15.075655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.889 [2024-11-03 10:12:15.075769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.889 [2024-11-03 10:12:15.075778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:46.889 [2024-11-03 10:12:15.075786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:46.889 [2024-11-03 10:12:15.075793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.889 [2024-11-03 10:12:15.078558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.889 [2024-11-03 10:12:15.078589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:46.889 [2024-11-03 10:12:15.078597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:17:46.889 [2024-11-03 10:12:15.078604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.889 [2024-11-03 10:12:15.080119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.889 [2024-11-03 10:12:15.080148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:46.889 [2024-11-03 10:12:15.080156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:17:46.889 [2024-11-03 10:12:15.080162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.889 [2024-11-03 10:12:15.081306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.889 [2024-11-03 10:12:15.081333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:46.889 [2024-11-03 10:12:15.081342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:17:46.889 [2024-11-03 10:12:15.081348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.889 [2024-11-03 10:12:15.082456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.889 [2024-11-03 10:12:15.082487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:46.889 [2024-11-03 10:12:15.082495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:17:46.889 [2024-11-03 10:12:15.082502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.889 [2024-11-03 10:12:15.082542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:46.889 [2024-11-03 10:12:15.082560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 
10:12:15.082592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:46.889 [2024-11-03 10:12:15.082769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.082990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.083001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.083008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.083016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:46.889 [2024-11-03 10:12:15.083023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:46.890 [2024-11-03 10:12:15.083318] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:46.890 [2024-11-03 10:12:15.083325] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:17:46.890 [2024-11-03 10:12:15.083333] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:46.890 [2024-11-03 10:12:15.083340] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:46.890 
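The dump above lists 100 user bands, each free with 0 of 261120 blocks written. Assuming the FTL's 4 KiB block size, a back-of-envelope check against the layout figures reported earlier:

  echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))   # -> 102000, total band space in MiB

which sits just under the 102400.00 MiB data_btm region from the layout dump, the remainder presumably being per-band tail metadata.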
[2024-11-03 10:12:15.083346] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:46.890 [2024-11-03 10:12:15.083353] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:46.890 [2024-11-03 10:12:15.083360] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:46.890 [2024-11-03 10:12:15.083367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:46.890 [2024-11-03 10:12:15.083378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:46.890 [2024-11-03 10:12:15.083384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:46.890 [2024-11-03 10:12:15.083390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:46.890 [2024-11-03 10:12:15.083397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.890 [2024-11-03 10:12:15.083404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:46.890 [2024-11-03 10:12:15.083415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:17:46.890 [2024-11-03 10:12:15.083422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.084570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.890 [2024-11-03 10:12:15.084599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:46.890 [2024-11-03 10:12:15.084608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:17:46.890 [2024-11-03 10:12:15.084615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.084690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.890 [2024-11-03 10:12:15.084701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:46.890 [2024-11-03 10:12:15.084709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:46.890 [2024-11-03 10:12:15.084716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.089173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.890 [2024-11-03 10:12:15.089206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.890 [2024-11-03 10:12:15.089216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.890 [2024-11-03 10:12:15.089241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.089290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.890 [2024-11-03 10:12:15.089303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.890 [2024-11-03 10:12:15.089310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.890 [2024-11-03 10:12:15.089317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.089356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.890 [2024-11-03 10:12:15.089365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.890 [2024-11-03 10:12:15.089372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.890 [2024-11-03 10:12:15.089379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.089395] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:46.890 [2024-11-03 10:12:15.089408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.890 [2024-11-03 10:12:15.089418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.890 [2024-11-03 10:12:15.089428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.097877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.890 [2024-11-03 10:12:15.097916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.890 [2024-11-03 10:12:15.097927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.890 [2024-11-03 10:12:15.097934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.104713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.890 [2024-11-03 10:12:15.104756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.890 [2024-11-03 10:12:15.104770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.890 [2024-11-03 10:12:15.104777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.890 [2024-11-03 10:12:15.104800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.890 [2024-11-03 10:12:15.104809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.890 [2024-11-03 10:12:15.104817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.890 [2024-11-03 10:12:15.104824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.891 [2024-11-03 10:12:15.104852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.891 [2024-11-03 10:12:15.104860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.891 [2024-11-03 10:12:15.104868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.891 [2024-11-03 10:12:15.104877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.891 [2024-11-03 10:12:15.104941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.891 [2024-11-03 10:12:15.104951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.891 [2024-11-03 10:12:15.104958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.891 [2024-11-03 10:12:15.104966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.891 [2024-11-03 10:12:15.104993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.891 [2024-11-03 10:12:15.105002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:46.891 [2024-11-03 10:12:15.105009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.891 [2024-11-03 10:12:15.105016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.891 [2024-11-03 10:12:15.105059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.891 [2024-11-03 10:12:15.105068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.891 [2024-11-03 10:12:15.105079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.891 [2024-11-03 10:12:15.105085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:46.891 [2024-11-03 10:12:15.105125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.891 [2024-11-03 10:12:15.105134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.891 [2024-11-03 10:12:15.105142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.891 [2024-11-03 10:12:15.105152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.891 [2024-11-03 10:12:15.105344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.875 ms, result 0 00:17:47.183 00:17:47.183 00:17:47.183 10:12:15 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85752 00:17:47.183 10:12:15 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85752 00:17:47.183 10:12:15 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:47.183 10:12:15 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85752 ']' 00:17:47.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.183 10:12:15 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.183 10:12:15 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.183 10:12:15 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.183 10:12:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.183 10:12:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:47.183 [2024-11-03 10:12:15.385216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:47.183 [2024-11-03 10:12:15.385402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85752 ] 00:17:47.183 [2024-11-03 10:12:15.523148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.447 [2024-11-03 10:12:15.573667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.020 10:12:16 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.020 10:12:16 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:48.020 10:12:16 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:48.280 [2024-11-03 10:12:16.448829] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:48.280 [2024-11-03 10:12:16.448910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:48.280 [2024-11-03 10:12:16.625597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.280 [2024-11-03 10:12:16.625663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:48.280 [2024-11-03 10:12:16.625679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:48.280 [2024-11-03 10:12:16.625689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.281 [2024-11-03 10:12:16.628321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.281 [2024-11-03 10:12:16.628370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.281 [2024-11-03 10:12:16.628388] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:17:48.281 [2024-11-03 10:12:16.628398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.281 [2024-11-03 10:12:16.628497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:48.281 [2024-11-03 10:12:16.628866] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:48.281 [2024-11-03 10:12:16.628910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.281 [2024-11-03 10:12:16.628922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.281 [2024-11-03 10:12:16.628939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:17:48.281 [2024-11-03 10:12:16.628949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.281 [2024-11-03 10:12:16.630688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:48.281 [2024-11-03 10:12:16.634417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.281 [2024-11-03 10:12:16.634469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:48.281 [2024-11-03 10:12:16.634481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:17:48.281 [2024-11-03 10:12:16.634490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.281 [2024-11-03 10:12:16.634590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.281 [2024-11-03 10:12:16.634601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:48.281 [2024-11-03 10:12:16.634615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:48.281 [2024-11-03 10:12:16.634623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.642635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.544 [2024-11-03 10:12:16.642675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.544 [2024-11-03 10:12:16.642687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.955 ms 00:17:48.544 [2024-11-03 10:12:16.642695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.642826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.544 [2024-11-03 10:12:16.642837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.544 [2024-11-03 10:12:16.642849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:48.544 [2024-11-03 10:12:16.642857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.642894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.544 [2024-11-03 10:12:16.642903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:48.544 [2024-11-03 10:12:16.642912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:48.544 [2024-11-03 10:12:16.642926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.642952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:48.544 [2024-11-03 10:12:16.645021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:48.544 [2024-11-03 10:12:16.645068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.544 [2024-11-03 10:12:16.645078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:17:48.544 [2024-11-03 10:12:16.645088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.645135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.544 [2024-11-03 10:12:16.645145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:48.544 [2024-11-03 10:12:16.645154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:48.544 [2024-11-03 10:12:16.645163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.645187] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:48.544 [2024-11-03 10:12:16.645210] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:48.544 [2024-11-03 10:12:16.645263] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:48.544 [2024-11-03 10:12:16.645284] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:48.544 [2024-11-03 10:12:16.645389] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:48.544 [2024-11-03 10:12:16.645402] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:48.544 [2024-11-03 10:12:16.645417] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:48.544 [2024-11-03 10:12:16.645432] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:48.544 [2024-11-03 10:12:16.645442] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:48.544 [2024-11-03 10:12:16.645454] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:48.544 [2024-11-03 10:12:16.645462] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:48.544 [2024-11-03 10:12:16.645472] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:48.544 [2024-11-03 10:12:16.645483] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:48.544 [2024-11-03 10:12:16.645493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.544 [2024-11-03 10:12:16.645503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:48.544 [2024-11-03 10:12:16.645513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:48.544 [2024-11-03 10:12:16.645521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.645610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.544 [2024-11-03 10:12:16.645625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:48.544 [2024-11-03 10:12:16.645635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:48.544 [2024-11-03 10:12:16.645646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.544 [2024-11-03 10:12:16.645750] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:48.544 [2024-11-03 10:12:16.645762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:48.544 [2024-11-03 10:12:16.645775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.544 [2024-11-03 10:12:16.645785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.544 [2024-11-03 10:12:16.645799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:48.544 [2024-11-03 10:12:16.645806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:48.544 [2024-11-03 10:12:16.645816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:48.544 [2024-11-03 10:12:16.645825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:48.544 [2024-11-03 10:12:16.645835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:48.544 [2024-11-03 10:12:16.645843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.544 [2024-11-03 10:12:16.645853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:48.544 [2024-11-03 10:12:16.645861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:48.544 [2024-11-03 10:12:16.645871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.544 [2024-11-03 10:12:16.645879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:48.544 [2024-11-03 10:12:16.645889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:48.544 [2024-11-03 10:12:16.645897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.544 [2024-11-03 10:12:16.645907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:48.544 [2024-11-03 10:12:16.645915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:48.544 [2024-11-03 10:12:16.645927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.544 [2024-11-03 10:12:16.645936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:48.544 [2024-11-03 10:12:16.645948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:48.544 [2024-11-03 10:12:16.645956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.544 [2024-11-03 10:12:16.645966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:48.544 [2024-11-03 10:12:16.645974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:48.544 [2024-11-03 10:12:16.645983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.544 [2024-11-03 10:12:16.645991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:48.544 [2024-11-03 10:12:16.646001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:48.544 [2024-11-03 10:12:16.646008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.544 [2024-11-03 10:12:16.646018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:48.544 [2024-11-03 10:12:16.646026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:48.544 [2024-11-03 10:12:16.646035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.544 [2024-11-03 10:12:16.646043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:48.544 [2024-11-03 
10:12:16.646054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:48.544 [2024-11-03 10:12:16.646062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.544 [2024-11-03 10:12:16.646072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:48.544 [2024-11-03 10:12:16.646079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:48.544 [2024-11-03 10:12:16.646091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.544 [2024-11-03 10:12:16.646100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:48.544 [2024-11-03 10:12:16.646109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:48.545 [2024-11-03 10:12:16.646117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.545 [2024-11-03 10:12:16.646127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:48.545 [2024-11-03 10:12:16.646135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:48.545 [2024-11-03 10:12:16.646145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.545 [2024-11-03 10:12:16.646153] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:48.545 [2024-11-03 10:12:16.646164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:48.545 [2024-11-03 10:12:16.646173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.545 [2024-11-03 10:12:16.646184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.545 [2024-11-03 10:12:16.646194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:48.545 [2024-11-03 10:12:16.646204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:48.545 [2024-11-03 10:12:16.646212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:48.545 [2024-11-03 10:12:16.646274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:48.545 [2024-11-03 10:12:16.646283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:48.545 [2024-11-03 10:12:16.646294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:48.545 [2024-11-03 10:12:16.646303] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:48.545 [2024-11-03 10:12:16.646314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.545 [2024-11-03 10:12:16.646323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:48.545 [2024-11-03 10:12:16.646333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:48.545 [2024-11-03 10:12:16.646341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:48.545 [2024-11-03 10:12:16.646352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:48.545 [2024-11-03 10:12:16.646360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:48.545 
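The superblock dump gives each region's offset and size as hex block counts, so the MiB figures in the layout dump can be re-derived by hand (again assuming 4 KiB FTL blocks):

  printf '%d MiB\n' $(( 0x5a00 * 4096 / 1024 / 1024 ))     # l2p region -> 90 MiB
  printf '%d MiB\n' $(( 0x13b6e0 * 4096 / 1024 / 1024 ))   # free nvc region (type:0xfffffffe) -> 5046 MiB

0x5a00 blocks matches the "Region l2p ... 90.00 MiB" entry above, and the trailing free region accounts for most of the 5171.00 MiB NV cache device capacity.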
[2024-11-03 10:12:16.646370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:48.545 [2024-11-03 10:12:16.646377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:48.545 [2024-11-03 10:12:16.646387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:48.545 [2024-11-03 10:12:16.646394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:48.545 [2024-11-03 10:12:16.646404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:48.545 [2024-11-03 10:12:16.646411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:48.545 [2024-11-03 10:12:16.646420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:48.545 [2024-11-03 10:12:16.646428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:48.545 [2024-11-03 10:12:16.646440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:48.545 [2024-11-03 10:12:16.646459] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:48.545 [2024-11-03 10:12:16.646470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.545 [2024-11-03 10:12:16.646478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:48.545 [2024-11-03 10:12:16.646488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:48.545 [2024-11-03 10:12:16.646496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:48.545 [2024-11-03 10:12:16.646505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:48.545 [2024-11-03 10:12:16.646513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.646526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:48.545 [2024-11-03 10:12:16.646534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:17:48.545 [2024-11-03 10:12:16.646545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.660323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.660367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.545 [2024-11-03 10:12:16.660379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.718 ms 00:17:48.545 [2024-11-03 10:12:16.660389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.660519] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.660534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:48.545 [2024-11-03 10:12:16.660545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:48.545 [2024-11-03 10:12:16.660555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.672557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.672609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.545 [2024-11-03 10:12:16.672620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.980 ms 00:17:48.545 [2024-11-03 10:12:16.672633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.672702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.672717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.545 [2024-11-03 10:12:16.672726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.545 [2024-11-03 10:12:16.672736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.673302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.673342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.545 [2024-11-03 10:12:16.673354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:48.545 [2024-11-03 10:12:16.673365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.673521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.673540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.545 [2024-11-03 10:12:16.673555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:48.545 [2024-11-03 10:12:16.673569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.691562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.691626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.545 [2024-11-03 10:12:16.691646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.964 ms 00:17:48.545 [2024-11-03 10:12:16.691658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.695591] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:48.545 [2024-11-03 10:12:16.695646] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:48.545 [2024-11-03 10:12:16.695660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.695670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:48.545 [2024-11-03 10:12:16.695679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:17:48.545 [2024-11-03 10:12:16.695689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.711411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 
10:12:16.711467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:48.545 [2024-11-03 10:12:16.711480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.644 ms 00:17:48.545 [2024-11-03 10:12:16.711493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.714123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.714180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:48.545 [2024-11-03 10:12:16.714190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:17:48.545 [2024-11-03 10:12:16.714199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.716523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.716570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:48.545 [2024-11-03 10:12:16.716580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:17:48.545 [2024-11-03 10:12:16.716590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.716956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.716980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:48.545 [2024-11-03 10:12:16.716994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:48.545 [2024-11-03 10:12:16.717004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.545 [2024-11-03 10:12:16.741633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.545 [2024-11-03 10:12:16.741695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:48.546 [2024-11-03 10:12:16.741708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.605 ms 00:17:48.546 [2024-11-03 10:12:16.741726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.749917] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:48.546 [2024-11-03 10:12:16.768441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.546 [2024-11-03 10:12:16.768493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:48.546 [2024-11-03 10:12:16.768507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.621 ms 00:17:48.546 [2024-11-03 10:12:16.768520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.768608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.546 [2024-11-03 10:12:16.768626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:48.546 [2024-11-03 10:12:16.768638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:48.546 [2024-11-03 10:12:16.768649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.768707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.546 [2024-11-03 10:12:16.768717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:48.546 [2024-11-03 10:12:16.768731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:48.546 [2024-11-03 
10:12:16.768739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.768776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.546 [2024-11-03 10:12:16.768787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:48.546 [2024-11-03 10:12:16.768800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:48.546 [2024-11-03 10:12:16.768807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.768850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:48.546 [2024-11-03 10:12:16.768860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.546 [2024-11-03 10:12:16.768870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:48.546 [2024-11-03 10:12:16.768877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:48.546 [2024-11-03 10:12:16.768887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.774571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.546 [2024-11-03 10:12:16.774627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:48.546 [2024-11-03 10:12:16.774639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.661 ms 00:17:48.546 [2024-11-03 10:12:16.774649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.774743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.546 [2024-11-03 10:12:16.774756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:48.546 [2024-11-03 10:12:16.774765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:48.546 [2024-11-03 10:12:16.774781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.546 [2024-11-03 10:12:16.777389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.546 [2024-11-03 10:12:16.780570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.953 ms, result 0 00:17:48.546 [2024-11-03 10:12:16.783396] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.546 Some configs were skipped because the RPC state that can call them passed over. 
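The 'FTL startup' management process above completes in 150.953 ms, after which the trim test (traced below at trim.sh@99 and trim.sh@100) unmaps 1024 blocks at each end of the device. The device holds 23592960 L2P entries (reported again during the spdk_dd startup further below), and 23591936 + 1024 = 23592960, so the second call trims exactly the last 1024 blocks. A minimal sketch of the two RPCs, assuming a running SPDK application that exposes an FTL bdev named ftl0, with paths shortened to repo-relative form:

# Trim the first and last 1024-block ranges of ftl0; each call drives the
# 'FTL trim' management process seen below and prints the RPC's boolean
# result (the bare 'true' in the log).
scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024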
00:17:48.546 10:12:16 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:48.807 [2024-11-03 10:12:17.020611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.807 [2024-11-03 10:12:17.020662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:48.807 [2024-11-03 10:12:17.020677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:17:48.807 [2024-11-03 10:12:17.020686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.807 [2024-11-03 10:12:17.020723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.978 ms, result 0 00:17:48.807 true 00:17:48.807 10:12:17 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:49.068 [2024-11-03 10:12:17.236635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.068 [2024-11-03 10:12:17.236700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:49.068 [2024-11-03 10:12:17.236713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:17:49.068 [2024-11-03 10:12:17.236723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.068 [2024-11-03 10:12:17.236762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.764 ms, result 0 00:17:49.068 true 00:17:49.068 10:12:17 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85752 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85752 ']' 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85752 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85752 00:17:49.068 killing process with pid 85752 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85752' 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85752 00:17:49.068 10:12:17 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85752 00:17:49.068 [2024-11-03 10:12:17.405574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.068 [2024-11-03 10:12:17.405632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.068 [2024-11-03 10:12:17.405647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:49.068 [2024-11-03 10:12:17.405655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.068 [2024-11-03 10:12:17.405681] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:49.068 [2024-11-03 10:12:17.406176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.068 [2024-11-03 10:12:17.406196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.068 [2024-11-03 10:12:17.406205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.481 ms 00:17:49.068 [2024-11-03 10:12:17.406215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.068 [2024-11-03 10:12:17.406509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.068 [2024-11-03 10:12:17.406522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.068 [2024-11-03 10:12:17.406531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:49.068 [2024-11-03 10:12:17.406541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.068 [2024-11-03 10:12:17.411437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.068 [2024-11-03 10:12:17.411474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.068 [2024-11-03 10:12:17.411487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.876 ms 00:17:49.068 [2024-11-03 10:12:17.411499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.068 [2024-11-03 10:12:17.418501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.068 [2024-11-03 10:12:17.418540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:49.068 [2024-11-03 10:12:17.418550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.965 ms 00:17:49.068 [2024-11-03 10:12:17.418561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.068 [2024-11-03 10:12:17.420742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.068 [2024-11-03 10:12:17.420785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.068 [2024-11-03 10:12:17.420794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:17:49.068 [2024-11-03 10:12:17.420803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.330 [2024-11-03 10:12:17.429264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.330 [2024-11-03 10:12:17.429306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.330 [2024-11-03 10:12:17.429316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.408 ms 00:17:49.330 [2024-11-03 10:12:17.429325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.331 [2024-11-03 10:12:17.429465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.331 [2024-11-03 10:12:17.429477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.331 [2024-11-03 10:12:17.429486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:49.331 [2024-11-03 10:12:17.429495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.331 [2024-11-03 10:12:17.432732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.331 [2024-11-03 10:12:17.432789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:49.331 [2024-11-03 10:12:17.432801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:17:49.331 [2024-11-03 10:12:17.432813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.331 [2024-11-03 10:12:17.434984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.331 [2024-11-03 10:12:17.435028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:49.331 [2024-11-03 
10:12:17.435038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:17:49.331 [2024-11-03 10:12:17.435046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.331 [2024-11-03 10:12:17.436534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.331 [2024-11-03 10:12:17.436577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.331 [2024-11-03 10:12:17.436587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:17:49.331 [2024-11-03 10:12:17.436597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.331 [2024-11-03 10:12:17.438309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.331 [2024-11-03 10:12:17.438352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:49.331 [2024-11-03 10:12:17.438361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:17:49.331 [2024-11-03 10:12:17.438369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.331 [2024-11-03 10:12:17.438404] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.331 [2024-11-03 10:12:17.438421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438558] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 
10:12:17.438775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:49.331 [2024-11-03 10:12:17.438984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.438991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.439001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.439009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.331 [2024-11-03 10:12:17.439018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.332 [2024-11-03 10:12:17.439289] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.332 [2024-11-03 10:12:17.439297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:17:49.332 [2024-11-03 10:12:17.439307] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.332 [2024-11-03 10:12:17.439315] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.332 [2024-11-03 10:12:17.439323] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.332 [2024-11-03 10:12:17.439338] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.332 [2024-11-03 10:12:17.439346] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.332 [2024-11-03 10:12:17.439359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.332 [2024-11-03 10:12:17.439368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.332 [2024-11-03 10:12:17.439374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.332 [2024-11-03 10:12:17.439382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.332 [2024-11-03 10:12:17.439389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.332 [2024-11-03 10:12:17.439401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.332 [2024-11-03 10:12:17.439409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:17:49.332 [2024-11-03 10:12:17.439420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.441090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.332 [2024-11-03 10:12:17.441124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.332 [2024-11-03 10:12:17.441133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:17:49.332 [2024-11-03 10:12:17.441142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.441265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:49.332 [2024-11-03 10:12:17.441278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.332 [2024-11-03 10:12:17.441287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:49.332 [2024-11-03 10:12:17.441296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.447291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.447332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.332 [2024-11-03 10:12:17.447342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.447352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.447416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.447427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.332 [2024-11-03 10:12:17.447435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.447446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.447484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.447499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.332 [2024-11-03 10:12:17.447508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.447517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.447535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.447544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.332 [2024-11-03 10:12:17.447551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.447560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.457837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.457885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.332 [2024-11-03 10:12:17.457895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.457904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.465755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.465801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.332 [2024-11-03 10:12:17.465811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.465823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.465863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.465881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.332 [2024-11-03 10:12:17.465889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.465901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
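Each FTL management step in this log is traced by mngt/ftl_mngt.c with the same four fields: Action (or, during the shutdown here and continuing below, Rollback), name, duration, and status. A hedged bash helper for mining a pristine copy of this log, assuming one log entry per line as on the raw console and an illustrative file name:

# Pair each management-step name with its duration, yielding lines such as
# "Initialize NV cache 11.980 ms".
grep -E 'trace_step: .* (name|duration):' ftl.log \
  | sed -E 's/.*(name|duration): //' | paste -d' ' - -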
00:17:49.332 [2024-11-03 10:12:17.465933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.465944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.332 [2024-11-03 10:12:17.465952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.465961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.466032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.466043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.332 [2024-11-03 10:12:17.466051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.466060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.466092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.466103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.332 [2024-11-03 10:12:17.466113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.466124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.466163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.466173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.332 [2024-11-03 10:12:17.466184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.466192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.466253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.332 [2024-11-03 10:12:17.466266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.332 [2024-11-03 10:12:17.466273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.332 [2024-11-03 10:12:17.466282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.332 [2024-11-03 10:12:17.466416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.818 ms, result 0 00:17:49.332 10:12:17 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.594 [2024-11-03 10:12:17.739716] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
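The Rollback entries above each record a duration of 0.000 ms, and the 'FTL shutdown' process finishes in 60.818 ms after persisting the L2P, NV cache, band, and trim metadata and setting the clean state. The test then re-reads the device with spdk_dd (traced above at trim.sh@105); its DPDK/EAL initialization continues below. A hedged sketch of that dump step, with the trace's absolute paths shortened to repo-relative form:

# Read 65536 blocks from the FTL bdev into a flat file, re-creating the
# bdev stack inside the spdk_dd process from the saved JSON config.
build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 \
    --json=test/ftl/config/ftl.json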
00:17:49.594 [2024-11-03 10:12:17.739869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85788 ] 00:17:49.594 [2024-11-03 10:12:17.877241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.594 [2024-11-03 10:12:17.926637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.854 [2024-11-03 10:12:18.040008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.854 [2024-11-03 10:12:18.040077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.854 [2024-11-03 10:12:18.201135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.854 [2024-11-03 10:12:18.201205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:49.855 [2024-11-03 10:12:18.201220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.855 [2024-11-03 10:12:18.201247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-03 10:12:18.203774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.855 [2024-11-03 10:12:18.203833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.855 [2024-11-03 10:12:18.203848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:17:49.855 [2024-11-03 10:12:18.203856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-03 10:12:18.203961] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:49.855 [2024-11-03 10:12:18.204272] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:49.855 [2024-11-03 10:12:18.204292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.855 [2024-11-03 10:12:18.204301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.855 [2024-11-03 10:12:18.204313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:17:49.855 [2024-11-03 10:12:18.204322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-03 10:12:18.206592] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:49.855 [2024-11-03 10:12:18.210467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.855 [2024-11-03 10:12:18.210536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:49.855 [2024-11-03 10:12:18.210551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.879 ms 00:17:49.855 [2024-11-03 10:12:18.210562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-03 10:12:18.210645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.855 [2024-11-03 10:12:18.210656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:49.855 [2024-11-03 10:12:18.210666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:49.855 [2024-11-03 10:12:18.210674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.219169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:50.116 [2024-11-03 10:12:18.219218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.116 [2024-11-03 10:12:18.219246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.447 ms 00:17:50.116 [2024-11-03 10:12:18.219255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.219402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.116 [2024-11-03 10:12:18.219414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.116 [2024-11-03 10:12:18.219424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:50.116 [2024-11-03 10:12:18.219436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.219464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.116 [2024-11-03 10:12:18.219473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.116 [2024-11-03 10:12:18.219486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:50.116 [2024-11-03 10:12:18.219493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.219517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:50.116 [2024-11-03 10:12:18.221578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.116 [2024-11-03 10:12:18.221619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.116 [2024-11-03 10:12:18.221630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:17:50.116 [2024-11-03 10:12:18.221638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.221684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.116 [2024-11-03 10:12:18.221696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.116 [2024-11-03 10:12:18.221707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:50.116 [2024-11-03 10:12:18.221715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.221733] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:50.116 [2024-11-03 10:12:18.221754] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:50.116 [2024-11-03 10:12:18.221796] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:50.116 [2024-11-03 10:12:18.221813] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:50.116 [2024-11-03 10:12:18.221921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.116 [2024-11-03 10:12:18.221932] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.116 [2024-11-03 10:12:18.221943] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:50.116 [2024-11-03 10:12:18.221954] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.116 [2024-11-03 10:12:18.221968] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.116 [2024-11-03 10:12:18.221977] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:50.116 [2024-11-03 10:12:18.221985] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.116 [2024-11-03 10:12:18.221992] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.116 [2024-11-03 10:12:18.222000] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.116 [2024-11-03 10:12:18.222009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.116 [2024-11-03 10:12:18.222019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.116 [2024-11-03 10:12:18.222029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:50.116 [2024-11-03 10:12:18.222036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.222124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.116 [2024-11-03 10:12:18.222133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.116 [2024-11-03 10:12:18.222141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:50.116 [2024-11-03 10:12:18.222149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.116 [2024-11-03 10:12:18.222267] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.116 [2024-11-03 10:12:18.222283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.116 [2024-11-03 10:12:18.222293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.116 [2024-11-03 10:12:18.222304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.116 [2024-11-03 10:12:18.222325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:50.116 [2024-11-03 10:12:18.222341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.116 [2024-11-03 10:12:18.222354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.116 [2024-11-03 10:12:18.222372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.116 [2024-11-03 10:12:18.222380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:50.116 [2024-11-03 10:12:18.222388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.116 [2024-11-03 10:12:18.222396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.116 [2024-11-03 10:12:18.222405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:50.116 [2024-11-03 10:12:18.222413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.116 [2024-11-03 10:12:18.222430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:50.116 [2024-11-03 10:12:18.222438] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.116 [2024-11-03 10:12:18.222454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.116 [2024-11-03 10:12:18.222470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.116 [2024-11-03 10:12:18.222477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.116 [2024-11-03 10:12:18.222497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.116 [2024-11-03 10:12:18.222505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.116 [2024-11-03 10:12:18.222520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.116 [2024-11-03 10:12:18.222527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.116 [2024-11-03 10:12:18.222543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.116 [2024-11-03 10:12:18.222550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:50.116 [2024-11-03 10:12:18.222557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.116 [2024-11-03 10:12:18.222565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.116 [2024-11-03 10:12:18.222572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:50.116 [2024-11-03 10:12:18.222580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.116 [2024-11-03 10:12:18.222588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.116 [2024-11-03 10:12:18.222595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:50.116 [2024-11-03 10:12:18.222603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.117 [2024-11-03 10:12:18.222613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.117 [2024-11-03 10:12:18.222621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:50.117 [2024-11-03 10:12:18.222628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.117 [2024-11-03 10:12:18.222636] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.117 [2024-11-03 10:12:18.222645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.117 [2024-11-03 10:12:18.222654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.117 [2024-11-03 10:12:18.222664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.117 [2024-11-03 10:12:18.222672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.117 [2024-11-03 10:12:18.222681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.117 [2024-11-03 10:12:18.222688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.117 
[2024-11-03 10:12:18.222696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.117 [2024-11-03 10:12:18.222703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.117 [2024-11-03 10:12:18.222709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.117 [2024-11-03 10:12:18.222718] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.117 [2024-11-03 10:12:18.222727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.117 [2024-11-03 10:12:18.222736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:50.117 [2024-11-03 10:12:18.222745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:50.117 [2024-11-03 10:12:18.222752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:50.117 [2024-11-03 10:12:18.222760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:50.117 [2024-11-03 10:12:18.222767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:50.117 [2024-11-03 10:12:18.222774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:50.117 [2024-11-03 10:12:18.222789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:50.117 [2024-11-03 10:12:18.222803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:50.117 [2024-11-03 10:12:18.222810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:50.117 [2024-11-03 10:12:18.222816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:50.117 [2024-11-03 10:12:18.222823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:50.117 [2024-11-03 10:12:18.222831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:50.117 [2024-11-03 10:12:18.222838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:50.117 [2024-11-03 10:12:18.222845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:50.117 [2024-11-03 10:12:18.222852] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.117 [2024-11-03 10:12:18.222860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.117 [2024-11-03 10:12:18.222868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.117 [2024-11-03 10:12:18.222878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.117 [2024-11-03 10:12:18.222886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.117 [2024-11-03 10:12:18.222894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.117 [2024-11-03 10:12:18.222902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.222910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.117 [2024-11-03 10:12:18.222920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:17:50.117 [2024-11-03 10:12:18.222928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.245447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.245517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.117 [2024-11-03 10:12:18.245538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.460 ms 00:17:50.117 [2024-11-03 10:12:18.245557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.245746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.245762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.117 [2024-11-03 10:12:18.245775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:50.117 [2024-11-03 10:12:18.245797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.257995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.258049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.117 [2024-11-03 10:12:18.258061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:17:50.117 [2024-11-03 10:12:18.258076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.258157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.258168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.117 [2024-11-03 10:12:18.258180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:50.117 [2024-11-03 10:12:18.258189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.258775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.258819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.117 [2024-11-03 10:12:18.258831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:17:50.117 [2024-11-03 10:12:18.258840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.259012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.259023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.117 [2024-11-03 10:12:18.259033] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:17:50.117 [2024-11-03 10:12:18.259046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.266701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.266748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.117 [2024-11-03 10:12:18.266758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.627 ms 00:17:50.117 [2024-11-03 10:12:18.266766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.270734] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:50.117 [2024-11-03 10:12:18.270797] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:50.117 [2024-11-03 10:12:18.270809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.270818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:50.117 [2024-11-03 10:12:18.270827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:17:50.117 [2024-11-03 10:12:18.270835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.287154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.287219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:50.117 [2024-11-03 10:12:18.287240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.233 ms 00:17:50.117 [2024-11-03 10:12:18.287249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.290186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.290253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:50.117 [2024-11-03 10:12:18.290264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:17:50.117 [2024-11-03 10:12:18.290271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.292870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.292923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:50.117 [2024-11-03 10:12:18.292944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:17:50.117 [2024-11-03 10:12:18.292951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.293381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.293396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:50.117 [2024-11-03 10:12:18.293409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:17:50.117 [2024-11-03 10:12:18.293417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.319588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.319645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:50.117 [2024-11-03 10:12:18.319659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.146 ms 00:17:50.117 [2024-11-03 10:12:18.319667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.327919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:50.117 [2024-11-03 10:12:18.346794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.346843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:50.117 [2024-11-03 10:12:18.346857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.037 ms 00:17:50.117 [2024-11-03 10:12:18.346867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.117 [2024-11-03 10:12:18.346961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.117 [2024-11-03 10:12:18.346971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:50.117 [2024-11-03 10:12:18.346982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:50.118 [2024-11-03 10:12:18.346997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.118 [2024-11-03 10:12:18.347057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.118 [2024-11-03 10:12:18.347067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:50.118 [2024-11-03 10:12:18.347076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:50.118 [2024-11-03 10:12:18.347084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.118 [2024-11-03 10:12:18.347109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.118 [2024-11-03 10:12:18.347117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:50.118 [2024-11-03 10:12:18.347126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:50.118 [2024-11-03 10:12:18.347138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.118 [2024-11-03 10:12:18.347175] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:50.118 [2024-11-03 10:12:18.347185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.118 [2024-11-03 10:12:18.347199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:50.118 [2024-11-03 10:12:18.347207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:50.118 [2024-11-03 10:12:18.347215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.118 [2024-11-03 10:12:18.352931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.118 [2024-11-03 10:12:18.352981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:50.118 [2024-11-03 10:12:18.352992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.669 ms 00:17:50.118 [2024-11-03 10:12:18.353000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.118 [2024-11-03 10:12:18.353095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.118 [2024-11-03 10:12:18.353108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:50.118 [2024-11-03 10:12:18.353117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:50.118 [2024-11-03 10:12:18.353125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.118 
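Each trace_step quadruple above (Action, name, duration, status) is emitted by mngt/ftl_mngt.c and marks one step of the 'FTL startup' management process. As a minimal sketch, assuming a saved copy of this console output with one NOTICE entry per line (ftl.log is a hypothetical capture file, not part of the test), the slowest steps can be ranked like so:

  awk '
    /trace_step.*name: /     { sub(/.*name: /, ""); step = $0 }
    /trace_step.*duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                               printf "%10.3f ms  %s\n", $0, step }
  ' ftl.log | sort -rn | head

For this run, Initialize L2P (27.037 ms) and Restore P2L checkpoints (26.146 ms) would top the list.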
[2024-11-03 10:12:18.354108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.118 [2024-11-03 10:12:18.355410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.677 ms, result 0 00:17:50.118 [2024-11-03 10:12:18.356781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.118 [2024-11-03 10:12:18.364056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.061  [2024-11-03T10:12:20.810Z] Copying: 19/256 [MB] (19 MBps) [2024-11-03T10:12:21.754Z] Copying: 32/256 [MB] (12 MBps) [2024-11-03T10:12:22.697Z] Copying: 44/256 [MB] (12 MBps) [2024-11-03T10:12:23.641Z] Copying: 56/256 [MB] (11 MBps) [2024-11-03T10:12:24.584Z] Copying: 73/256 [MB] (16 MBps) [2024-11-03T10:12:25.526Z] Copying: 83/256 [MB] (10 MBps) [2024-11-03T10:12:26.469Z] Copying: 95/256 [MB] (12 MBps) [2024-11-03T10:12:27.853Z] Copying: 112/256 [MB] (16 MBps) [2024-11-03T10:12:28.425Z] Copying: 123/256 [MB] (10 MBps) [2024-11-03T10:12:29.814Z] Copying: 136/256 [MB] (12 MBps) [2024-11-03T10:12:30.759Z] Copying: 150/256 [MB] (14 MBps) [2024-11-03T10:12:31.757Z] Copying: 163/256 [MB] (13 MBps) [2024-11-03T10:12:32.701Z] Copying: 176/256 [MB] (13 MBps) [2024-11-03T10:12:33.644Z] Copying: 189/256 [MB] (12 MBps) [2024-11-03T10:12:34.587Z] Copying: 208/256 [MB] (18 MBps) [2024-11-03T10:12:35.529Z] Copying: 229/256 [MB] (20 MBps) [2024-11-03T10:12:36.102Z] Copying: 241/256 [MB] (12 MBps) [2024-11-03T10:12:36.363Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-03 10:12:36.328057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:08.001 [2024-11-03 10:12:36.330380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.330451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:08.001 [2024-11-03 10:12:36.330482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:08.001 [2024-11-03 10:12:36.330502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.330542] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:08.001 [2024-11-03 10:12:36.331787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.331849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:08.001 [2024-11-03 10:12:36.331866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:18:08.001 [2024-11-03 10:12:36.331880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.332438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.332479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:08.001 [2024-11-03 10:12:36.332497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:18:08.001 [2024-11-03 10:12:36.332510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.336330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.336360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:18:08.001 [2024-11-03 10:12:36.336370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.793 ms 00:18:08.001 [2024-11-03 10:12:36.336379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.343415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.343474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:08.001 [2024-11-03 10:12:36.343486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.008 ms 00:18:08.001 [2024-11-03 10:12:36.343499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.346613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.346673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:08.001 [2024-11-03 10:12:36.346683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:18:08.001 [2024-11-03 10:12:36.346692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.351883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.351944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:08.001 [2024-11-03 10:12:36.351965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.135 ms 00:18:08.001 [2024-11-03 10:12:36.351973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.352130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.352142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:08.001 [2024-11-03 10:12:36.352152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:08.001 [2024-11-03 10:12:36.352161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.355834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.355891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:08.001 [2024-11-03 10:12:36.355901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:18:08.001 [2024-11-03 10:12:36.355909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.358676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.358733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:08.001 [2024-11-03 10:12:36.358743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.717 ms 00:18:08.001 [2024-11-03 10:12:36.358751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.001 [2024-11-03 10:12:36.361424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.001 [2024-11-03 10:12:36.361476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:08.002 [2024-11-03 10:12:36.361487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:18:08.002 [2024-11-03 10:12:36.361495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.264 [2024-11-03 10:12:36.363943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.264 [2024-11-03 10:12:36.363994] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:08.264 [2024-11-03 10:12:36.364004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:18:08.264 [2024-11-03 10:12:36.364011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.264 [2024-11-03 10:12:36.364073] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:08.264 [2024-11-03 10:12:36.364111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 
[2024-11-03 10:12:36.364309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:08.264 [2024-11-03 10:12:36.364476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:08.265 [2024-11-03 10:12:36.364514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:08.265 [2024-11-03 10:12:36.364945] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:08.265 [2024-11-03 10:12:36.364953] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4dcf88cb-9c36-4fbc-b806-2f618a8df3cb 00:18:08.265 [2024-11-03 10:12:36.364967] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:08.265 [2024-11-03 10:12:36.364974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:08.265 [2024-11-03 10:12:36.364981] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:08.265 [2024-11-03 10:12:36.364990] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:08.265 [2024-11-03 10:12:36.365004] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:08.265 [2024-11-03 10:12:36.365012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:08.265 [2024-11-03 10:12:36.365021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:08.265 [2024-11-03 10:12:36.365028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:08.265 [2024-11-03 10:12:36.365035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:08.265 [2024-11-03 10:12:36.365043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.265 [2024-11-03 10:12:36.365052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:08.265 [2024-11-03 10:12:36.365064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:18:08.265 [2024-11-03 10:12:36.365072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.265 [2024-11-03 10:12:36.367464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.265 [2024-11-03 10:12:36.367504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:08.265 [2024-11-03 10:12:36.367514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.371 ms 00:18:08.265 [2024-11-03 10:12:36.367523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.265 [2024-11-03 10:12:36.367651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.265 [2024-11-03 10:12:36.367667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:08.265 [2024-11-03 10:12:36.367677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:08.265 [2024-11-03 10:12:36.367684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.265 [2024-11-03 10:12:36.375145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.265 [2024-11-03 10:12:36.375200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.265 [2024-11-03 10:12:36.375212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.265 [2024-11-03 10:12:36.375220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:08.265 [2024-11-03 10:12:36.375341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.265 [2024-11-03 10:12:36.375355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.265 [2024-11-03 10:12:36.375369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.265 [2024-11-03 10:12:36.375377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.265 [2024-11-03 10:12:36.375428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.265 [2024-11-03 10:12:36.375438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.265 [2024-11-03 10:12:36.375447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.375455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.375475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.375484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.266 [2024-11-03 10:12:36.375495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.375504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.389371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.389425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.266 [2024-11-03 10:12:36.389445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.389454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.399688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.399744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.266 [2024-11-03 10:12:36.399755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.399764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.399809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.399819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.266 [2024-11-03 10:12:36.399827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.399842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.399873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.399883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.266 [2024-11-03 10:12:36.399891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.399902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.399969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.399979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.266 [2024-11-03 10:12:36.399988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 
10:12:36.399996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.400032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.400042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:08.266 [2024-11-03 10:12:36.400050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.400058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.400125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.400136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.266 [2024-11-03 10:12:36.400144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.400151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.400199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.266 [2024-11-03 10:12:36.400210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.266 [2024-11-03 10:12:36.400218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.266 [2024-11-03 10:12:36.400250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.266 [2024-11-03 10:12:36.400399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.005 ms, result 0 00:18:08.266 00:18:08.266 00:18:08.266 10:12:36 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:08.836 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:08.836 10:12:37 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:08.836 10:12:37 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:08.836 10:12:37 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:08.836 10:12:37 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:08.836 10:12:37 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:09.097 10:12:37 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:09.097 10:12:37 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85752 00:18:09.097 10:12:37 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85752 ']' 00:18:09.097 Process with pid 85752 is not found 00:18:09.097 10:12:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85752 00:18:09.097 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85752) - No such process 00:18:09.097 10:12:37 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85752 is not found' 00:18:09.097 00:18:09.097 real 1m12.877s 00:18:09.097 user 1m36.796s 00:18:09.097 sys 0m5.284s 00:18:09.097 10:12:37 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.097 ************************************ 00:18:09.097 END TEST ftl_trim 00:18:09.097 ************************************ 00:18:09.097 10:12:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:09.097 10:12:37 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:09.097 10:12:37 ftl -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:09.097 10:12:37 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.097 10:12:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:09.098 ************************************ 00:18:09.098 START TEST ftl_restore 00:18:09.098 ************************************ 00:18:09.098 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:09.098 * Looking for test storage... 00:18:09.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:09.098 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:09.098 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:09.098 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:09.359 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.359 10:12:37 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:09.359 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.359 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.359 --rc genhtml_branch_coverage=1 00:18:09.359 --rc genhtml_function_coverage=1 00:18:09.359 --rc genhtml_legend=1 00:18:09.359 --rc geninfo_all_blocks=1 00:18:09.359 --rc geninfo_unexecuted_blocks=1 00:18:09.359 00:18:09.359 ' 00:18:09.359 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.359 --rc genhtml_branch_coverage=1 00:18:09.359 --rc genhtml_function_coverage=1 00:18:09.359 --rc genhtml_legend=1 00:18:09.359 --rc geninfo_all_blocks=1 00:18:09.359 --rc geninfo_unexecuted_blocks=1 00:18:09.359 00:18:09.359 ' 00:18:09.359 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.359 --rc genhtml_branch_coverage=1 00:18:09.359 --rc genhtml_function_coverage=1 00:18:09.359 --rc genhtml_legend=1 00:18:09.359 --rc geninfo_all_blocks=1 00:18:09.359 --rc geninfo_unexecuted_blocks=1 00:18:09.359 00:18:09.359 ' 00:18:09.359 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:09.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.359 --rc genhtml_branch_coverage=1 00:18:09.359 --rc genhtml_function_coverage=1 00:18:09.359 --rc genhtml_legend=1 00:18:09.359 --rc geninfo_all_blocks=1 00:18:09.359 --rc geninfo_unexecuted_blocks=1 00:18:09.359 00:18:09.359 ' 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
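Stripped of the xtrace prefixes, the common.sh preamble traced here (and continuing just below) boils down to resolving the repo layout once so every later RPC call can use an absolute path. An illustrative reconstruction of the traced commands, not an excerpt of the script:

  testdir=$(readlink -f "$(dirname "$0")")   # -> /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # -> /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py             # RPC client used for the bdev setup below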
00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:09.359 10:12:37 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.cvkS3srl8I 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:09.360 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86063 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86063 00:18:09.360 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86063 ']' 00:18:09.360 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.360 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.360 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.360 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.360 10:12:37 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:09.360 10:12:37 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:09.360 [2024-11-03 10:12:37.581389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:09.360 [2024-11-03 10:12:37.581549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86063 ] 00:18:09.360 [2024-11-03 10:12:37.717855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.621 [2024-11-03 10:12:37.768036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.194 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.194 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:10.194 10:12:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:10.194 10:12:38 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:10.194 10:12:38 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:10.194 10:12:38 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:10.194 10:12:38 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:10.194 10:12:38 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:10.455 10:12:38 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:10.455 10:12:38 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:10.455 10:12:38 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:10.455 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:10.455 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:10.455 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:10.455 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:10.455 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:10.717 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:10.717 { 00:18:10.717 "name": "nvme0n1", 00:18:10.717 "aliases": [ 00:18:10.717 "3d04f778-1657-4b64-9d63-a3465e48efeb" 00:18:10.717 ], 00:18:10.717 "product_name": "NVMe disk", 00:18:10.717 "block_size": 4096, 00:18:10.717 "num_blocks": 1310720, 00:18:10.717 "uuid": 
"3d04f778-1657-4b64-9d63-a3465e48efeb", 00:18:10.717 "numa_id": -1, 00:18:10.717 "assigned_rate_limits": { 00:18:10.717 "rw_ios_per_sec": 0, 00:18:10.717 "rw_mbytes_per_sec": 0, 00:18:10.717 "r_mbytes_per_sec": 0, 00:18:10.717 "w_mbytes_per_sec": 0 00:18:10.717 }, 00:18:10.717 "claimed": true, 00:18:10.717 "claim_type": "read_many_write_one", 00:18:10.717 "zoned": false, 00:18:10.717 "supported_io_types": { 00:18:10.717 "read": true, 00:18:10.717 "write": true, 00:18:10.717 "unmap": true, 00:18:10.717 "flush": true, 00:18:10.717 "reset": true, 00:18:10.717 "nvme_admin": true, 00:18:10.717 "nvme_io": true, 00:18:10.717 "nvme_io_md": false, 00:18:10.717 "write_zeroes": true, 00:18:10.717 "zcopy": false, 00:18:10.717 "get_zone_info": false, 00:18:10.717 "zone_management": false, 00:18:10.717 "zone_append": false, 00:18:10.717 "compare": true, 00:18:10.717 "compare_and_write": false, 00:18:10.717 "abort": true, 00:18:10.717 "seek_hole": false, 00:18:10.717 "seek_data": false, 00:18:10.717 "copy": true, 00:18:10.717 "nvme_iov_md": false 00:18:10.717 }, 00:18:10.717 "driver_specific": { 00:18:10.717 "nvme": [ 00:18:10.717 { 00:18:10.717 "pci_address": "0000:00:11.0", 00:18:10.717 "trid": { 00:18:10.717 "trtype": "PCIe", 00:18:10.717 "traddr": "0000:00:11.0" 00:18:10.717 }, 00:18:10.717 "ctrlr_data": { 00:18:10.717 "cntlid": 0, 00:18:10.717 "vendor_id": "0x1b36", 00:18:10.717 "model_number": "QEMU NVMe Ctrl", 00:18:10.717 "serial_number": "12341", 00:18:10.717 "firmware_revision": "8.0.0", 00:18:10.717 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:10.717 "oacs": { 00:18:10.717 "security": 0, 00:18:10.717 "format": 1, 00:18:10.717 "firmware": 0, 00:18:10.717 "ns_manage": 1 00:18:10.717 }, 00:18:10.717 "multi_ctrlr": false, 00:18:10.717 "ana_reporting": false 00:18:10.717 }, 00:18:10.717 "vs": { 00:18:10.717 "nvme_version": "1.4" 00:18:10.717 }, 00:18:10.717 "ns_data": { 00:18:10.717 "id": 1, 00:18:10.717 "can_share": false 00:18:10.717 } 00:18:10.717 } 00:18:10.717 ], 00:18:10.717 "mp_policy": "active_passive" 00:18:10.717 } 00:18:10.717 } 00:18:10.717 ]' 00:18:10.717 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:10.717 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:10.717 10:12:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:10.717 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:10.717 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:10.717 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:10.717 10:12:39 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:10.717 10:12:39 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:10.717 10:12:39 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:10.717 10:12:39 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:10.717 10:12:39 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:10.977 10:12:39 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=187df106-2bef-4c42-b073-f97c92e02c6b 00:18:10.977 10:12:39 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:10.977 10:12:39 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 187df106-2bef-4c42-b073-f97c92e02c6b 00:18:11.238 10:12:39 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:11.498 10:12:39 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=4b7f602c-0722-42d6-8142-c838ed8c1562 00:18:11.498 10:12:39 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4b7f602c-0722-42d6-8142-c838ed8c1562 00:18:11.759 10:12:39 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:11.760 10:12:39 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:11.760 10:12:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:11.760 10:12:39 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:11.760 10:12:39 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:11.760 10:12:39 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:11.760 10:12:39 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:11.760 10:12:39 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:11.760 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:11.760 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:11.760 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:11.760 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:11.760 10:12:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:11.760 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:11.760 { 00:18:11.760 "name": "ff3d4256-6eb5-4d46-80d3-6ef9867102e8", 00:18:11.760 "aliases": [ 00:18:11.760 "lvs/nvme0n1p0" 00:18:11.760 ], 00:18:11.760 "product_name": "Logical Volume", 00:18:11.760 "block_size": 4096, 00:18:11.760 "num_blocks": 26476544, 00:18:11.760 "uuid": "ff3d4256-6eb5-4d46-80d3-6ef9867102e8", 00:18:11.760 "assigned_rate_limits": { 00:18:11.760 "rw_ios_per_sec": 0, 00:18:11.760 "rw_mbytes_per_sec": 0, 00:18:11.760 "r_mbytes_per_sec": 0, 00:18:11.760 "w_mbytes_per_sec": 0 00:18:11.760 }, 00:18:11.760 "claimed": false, 00:18:11.760 "zoned": false, 00:18:11.760 "supported_io_types": { 00:18:11.760 "read": true, 00:18:11.760 "write": true, 00:18:11.760 "unmap": true, 00:18:11.760 "flush": false, 00:18:11.760 "reset": true, 00:18:11.760 "nvme_admin": false, 00:18:11.760 "nvme_io": false, 00:18:11.760 "nvme_io_md": false, 00:18:11.760 "write_zeroes": true, 00:18:11.760 "zcopy": false, 00:18:11.760 "get_zone_info": false, 00:18:11.760 "zone_management": false, 00:18:11.760 "zone_append": false, 00:18:11.760 "compare": false, 00:18:11.760 "compare_and_write": false, 00:18:11.760 "abort": false, 00:18:11.760 "seek_hole": true, 00:18:11.760 "seek_data": true, 00:18:11.760 "copy": false, 00:18:11.760 "nvme_iov_md": false 00:18:11.760 }, 00:18:11.760 "driver_specific": { 00:18:11.760 "lvol": { 00:18:11.760 "lvol_store_uuid": "4b7f602c-0722-42d6-8142-c838ed8c1562", 00:18:11.760 "base_bdev": "nvme0n1", 00:18:11.760 "thin_provision": true, 00:18:11.760 "num_allocated_clusters": 0, 00:18:11.760 "snapshot": false, 00:18:11.760 "clone": false, 00:18:11.760 "esnap_clone": false 00:18:11.760 } 00:18:11.760 } 00:18:11.760 } 00:18:11.760 ]' 00:18:11.760 10:12:40 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:12.021 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:12.021 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:12.021 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:12.021 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:12.021 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:12.021 10:12:40 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:12.021 10:12:40 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:12.021 10:12:40 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:12.282 10:12:40 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:12.282 10:12:40 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:12.282 10:12:40 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:12.282 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:12.282 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:12.282 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:12.282 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:12.282 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:12.282 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:12.282 { 00:18:12.282 "name": "ff3d4256-6eb5-4d46-80d3-6ef9867102e8", 00:18:12.282 "aliases": [ 00:18:12.282 "lvs/nvme0n1p0" 00:18:12.282 ], 00:18:12.282 "product_name": "Logical Volume", 00:18:12.282 "block_size": 4096, 00:18:12.282 "num_blocks": 26476544, 00:18:12.282 "uuid": "ff3d4256-6eb5-4d46-80d3-6ef9867102e8", 00:18:12.282 "assigned_rate_limits": { 00:18:12.282 "rw_ios_per_sec": 0, 00:18:12.282 "rw_mbytes_per_sec": 0, 00:18:12.282 "r_mbytes_per_sec": 0, 00:18:12.282 "w_mbytes_per_sec": 0 00:18:12.282 }, 00:18:12.282 "claimed": false, 00:18:12.282 "zoned": false, 00:18:12.282 "supported_io_types": { 00:18:12.282 "read": true, 00:18:12.282 "write": true, 00:18:12.282 "unmap": true, 00:18:12.282 "flush": false, 00:18:12.282 "reset": true, 00:18:12.282 "nvme_admin": false, 00:18:12.282 "nvme_io": false, 00:18:12.282 "nvme_io_md": false, 00:18:12.282 "write_zeroes": true, 00:18:12.282 "zcopy": false, 00:18:12.282 "get_zone_info": false, 00:18:12.282 "zone_management": false, 00:18:12.282 "zone_append": false, 00:18:12.282 "compare": false, 00:18:12.282 "compare_and_write": false, 00:18:12.282 "abort": false, 00:18:12.282 "seek_hole": true, 00:18:12.282 "seek_data": true, 00:18:12.282 "copy": false, 00:18:12.282 "nvme_iov_md": false 00:18:12.282 }, 00:18:12.282 "driver_specific": { 00:18:12.282 "lvol": { 00:18:12.282 "lvol_store_uuid": "4b7f602c-0722-42d6-8142-c838ed8c1562", 00:18:12.282 "base_bdev": "nvme0n1", 00:18:12.282 "thin_provision": true, 00:18:12.282 "num_allocated_clusters": 0, 00:18:12.282 "snapshot": false, 00:18:12.282 "clone": false, 00:18:12.282 "esnap_clone": false 00:18:12.282 } 00:18:12.282 } 00:18:12.282 } 00:18:12.282 ]' 00:18:12.282 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
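The recurring jq '.[] .block_size' / jq '.[] .num_blocks' pairs are the get_bdev_size helper from common/autotest_common.sh turning a bdev dump into a size in MiB. Reconstructed from the traced variable names (a sketch of the shape, not the verbatim helper):

  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 throughout this run
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 for the lvol
      echo $(( bs * nb / 1024 / 1024 ))             # bytes -> MiB
  }

Both sizes seen so far check out: 1310720 x 4096 B = 5120 MiB for nvme0n1, and 26476544 x 4096 B = 103424 MiB for lvs/nvme0n1p0.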
00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:12.543 10:12:40 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:12.543 10:12:40 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:12.543 10:12:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:12.543 10:12:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:12.543 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:12.804 10:12:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff3d4256-6eb5-4d46-80d3-6ef9867102e8 00:18:12.804 10:12:41 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:12.804 { 00:18:12.804 "name": "ff3d4256-6eb5-4d46-80d3-6ef9867102e8", 00:18:12.804 "aliases": [ 00:18:12.804 "lvs/nvme0n1p0" 00:18:12.804 ], 00:18:12.804 "product_name": "Logical Volume", 00:18:12.804 "block_size": 4096, 00:18:12.804 "num_blocks": 26476544, 00:18:12.804 "uuid": "ff3d4256-6eb5-4d46-80d3-6ef9867102e8", 00:18:12.804 "assigned_rate_limits": { 00:18:12.804 "rw_ios_per_sec": 0, 00:18:12.804 "rw_mbytes_per_sec": 0, 00:18:12.804 "r_mbytes_per_sec": 0, 00:18:12.804 "w_mbytes_per_sec": 0 00:18:12.804 }, 00:18:12.804 "claimed": false, 00:18:12.804 "zoned": false, 00:18:12.804 "supported_io_types": { 00:18:12.804 "read": true, 00:18:12.804 "write": true, 00:18:12.804 "unmap": true, 00:18:12.804 "flush": false, 00:18:12.804 "reset": true, 00:18:12.804 "nvme_admin": false, 00:18:12.804 "nvme_io": false, 00:18:12.804 "nvme_io_md": false, 00:18:12.804 "write_zeroes": true, 00:18:12.804 "zcopy": false, 00:18:12.804 "get_zone_info": false, 00:18:12.804 "zone_management": false, 00:18:12.804 "zone_append": false, 00:18:12.804 "compare": false, 00:18:12.804 "compare_and_write": false, 00:18:12.804 "abort": false, 00:18:12.804 "seek_hole": true, 00:18:12.804 "seek_data": true, 00:18:12.804 "copy": false, 00:18:12.804 "nvme_iov_md": false 00:18:12.804 }, 00:18:12.804 "driver_specific": { 00:18:12.804 "lvol": { 00:18:12.804 "lvol_store_uuid": "4b7f602c-0722-42d6-8142-c838ed8c1562", 00:18:12.804 "base_bdev": "nvme0n1", 00:18:12.804 "thin_provision": true, 00:18:12.804 "num_allocated_clusters": 0, 00:18:12.804 "snapshot": false, 00:18:12.804 "clone": false, 00:18:12.804 "esnap_clone": false 00:18:12.804 } 00:18:12.804 } 00:18:12.804 } 00:18:12.804 ]' 00:18:12.804 10:12:41 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:12.804 10:12:41 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:12.804 10:12:41 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:12.805 10:12:41 ftl.ftl_restore -- 
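The NV cache comes together in two steps here: the second NVMe controller (0000:00:10.0) is attached as nvc0, and its namespace is split so that only a 5171 MiB slice, nvc0n1p0, will back the write-buffer cache. 5171 is exactly 103424 * 5 / 100 in integer arithmetic, so the cache is apparently sized at 5 % of the base volume; that ratio is inferred from the traced values, not read from ftl/common.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # exposes nvc0n1
  cache_size=$(( 103424 * 5 / 100 ))                # = 5171 MiB (inferred sizing rule)
  $rpc bdev_split_create nvc0n1 -s "$cache_size" 1  # one split: nvc0n1p0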
common/autotest_common.sh@1384 -- # nb=26476544 00:18:12.805 10:12:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:12.805 10:12:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:12.805 10:12:41 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:12.805 10:12:41 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ff3d4256-6eb5-4d46-80d3-6ef9867102e8 --l2p_dram_limit 10' 00:18:12.805 10:12:41 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:12.805 10:12:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:12.805 10:12:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:12.805 10:12:41 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:12.805 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:12.805 10:12:41 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ff3d4256-6eb5-4d46-80d3-6ef9867102e8 --l2p_dram_limit 10 -c nvc0n1p0 00:18:13.066 [2024-11-03 10:12:41.345089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.066 [2024-11-03 10:12:41.345128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:13.066 [2024-11-03 10:12:41.345139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.066 [2024-11-03 10:12:41.345147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.066 [2024-11-03 10:12:41.345191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.066 [2024-11-03 10:12:41.345200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.066 [2024-11-03 10:12:41.345206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:13.066 [2024-11-03 10:12:41.345215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.345246] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:13.067 [2024-11-03 10:12:41.345492] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:13.067 [2024-11-03 10:12:41.345508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.345515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.067 [2024-11-03 10:12:41.345523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:18:13.067 [2024-11-03 10:12:41.345532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.345668] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7126a7fa-c0d4-4fad-a178-ffc386b8e8f1 00:18:13.067 [2024-11-03 10:12:41.346606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.346703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:13.067 [2024-11-03 10:12:41.346720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:13.067 [2024-11-03 10:12:41.346726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.351380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 
10:12:41.351404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.067 [2024-11-03 10:12:41.351413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:18:13.067 [2024-11-03 10:12:41.351419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.351478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.351485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.067 [2024-11-03 10:12:41.351493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:13.067 [2024-11-03 10:12:41.351500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.351540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.351548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:13.067 [2024-11-03 10:12:41.351555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:13.067 [2024-11-03 10:12:41.351561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.351578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.067 [2024-11-03 10:12:41.352820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.352848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.067 [2024-11-03 10:12:41.352857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:18:13.067 [2024-11-03 10:12:41.352864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.352889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.352897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:13.067 [2024-11-03 10:12:41.352905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:13.067 [2024-11-03 10:12:41.352914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.352929] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:13.067 [2024-11-03 10:12:41.353044] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:13.067 [2024-11-03 10:12:41.353054] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:13.067 [2024-11-03 10:12:41.353063] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:13.067 [2024-11-03 10:12:41.353071] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353079] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353085] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:13.067 [2024-11-03 10:12:41.353094] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:13.067 [2024-11-03 10:12:41.353100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:13.067 [2024-11-03 10:12:41.353107] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:13.067 [2024-11-03 10:12:41.353114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.353121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:13.067 [2024-11-03 10:12:41.353127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:18:13.067 [2024-11-03 10:12:41.353134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.353197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.067 [2024-11-03 10:12:41.353206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:13.067 [2024-11-03 10:12:41.353212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:13.067 [2024-11-03 10:12:41.353218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.067 [2024-11-03 10:12:41.353306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:13.067 [2024-11-03 10:12:41.353315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:13.067 [2024-11-03 10:12:41.353322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:13.067 [2024-11-03 10:12:41.353342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:13.067 [2024-11-03 10:12:41.353359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.067 [2024-11-03 10:12:41.353371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:13.067 [2024-11-03 10:12:41.353378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:13.067 [2024-11-03 10:12:41.353383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.067 [2024-11-03 10:12:41.353392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:13.067 [2024-11-03 10:12:41.353397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:13.067 [2024-11-03 10:12:41.353403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:13.067 [2024-11-03 10:12:41.353415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:13.067 [2024-11-03 10:12:41.353431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:13.067 
[2024-11-03 10:12:41.353450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:13.067 [2024-11-03 10:12:41.353466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:13.067 [2024-11-03 10:12:41.353488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:13.067 [2024-11-03 10:12:41.353505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.067 [2024-11-03 10:12:41.353516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:13.067 [2024-11-03 10:12:41.353522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:13.067 [2024-11-03 10:12:41.353528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.067 [2024-11-03 10:12:41.353535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:13.067 [2024-11-03 10:12:41.353540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:13.067 [2024-11-03 10:12:41.353546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:13.067 [2024-11-03 10:12:41.353557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:13.067 [2024-11-03 10:12:41.353562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353568] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:13.067 [2024-11-03 10:12:41.353578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:13.067 [2024-11-03 10:12:41.353586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.067 [2024-11-03 10:12:41.353591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.067 [2024-11-03 10:12:41.353601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:13.067 [2024-11-03 10:12:41.353606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:13.067 [2024-11-03 10:12:41.353612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:13.067 [2024-11-03 10:12:41.353617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:13.067 [2024-11-03 10:12:41.353625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:13.067 [2024-11-03 10:12:41.353630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:13.068 [2024-11-03 10:12:41.353639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:13.068 [2024-11-03 
10:12:41.353646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.068 [2024-11-03 10:12:41.353654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:13.068 [2024-11-03 10:12:41.353660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:13.068 [2024-11-03 10:12:41.353666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:13.068 [2024-11-03 10:12:41.353671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:13.068 [2024-11-03 10:12:41.353678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:13.068 [2024-11-03 10:12:41.353683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:13.068 [2024-11-03 10:12:41.353692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:13.068 [2024-11-03 10:12:41.353697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:13.068 [2024-11-03 10:12:41.353704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:13.068 [2024-11-03 10:12:41.353709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:13.068 [2024-11-03 10:12:41.353715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:13.068 [2024-11-03 10:12:41.353721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:13.068 [2024-11-03 10:12:41.353727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:13.068 [2024-11-03 10:12:41.353734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:13.068 [2024-11-03 10:12:41.353741] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:13.068 [2024-11-03 10:12:41.353749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.068 [2024-11-03 10:12:41.353756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:13.068 [2024-11-03 10:12:41.353762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:13.068 [2024-11-03 10:12:41.353769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:13.068 [2024-11-03 10:12:41.353775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:13.068 [2024-11-03 10:12:41.353783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.068 [2024-11-03 10:12:41.353789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:13.068 [2024-11-03 10:12:41.353797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:18:13.068 [2024-11-03 10:12:41.353802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.068 [2024-11-03 10:12:41.353833] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:13.068 [2024-11-03 10:12:41.353840] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:17.278 [2024-11-03 10:12:45.330014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.330381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:17.278 [2024-11-03 10:12:45.330489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3976.156 ms 00:18:17.278 [2024-11-03 10:12:45.330519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.344345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.344545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.278 [2024-11-03 10:12:45.344670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.694 ms 00:18:17.278 [2024-11-03 10:12:45.344697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.344845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.345110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:17.278 [2024-11-03 10:12:45.345144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:17.278 [2024-11-03 10:12:45.345164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.356839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.357030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.278 [2024-11-03 10:12:45.357104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.587 ms 00:18:17.278 [2024-11-03 10:12:45.357130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.357185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.357207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.278 [2024-11-03 10:12:45.357260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:17.278 [2024-11-03 10:12:45.357281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.357914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.358066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.278 [2024-11-03 10:12:45.358089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:18:17.278 [2024-11-03 10:12:45.358099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 
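Two notes on the bdev_ftl_create call that produced this startup sequence. The "[: : integer expression expected" complaint from restore.sh line 54 is a small script bug: an option variable that is empty in this configuration gets compared with -eq, which needs an integer on both sides. A defensive form (opt is a hypothetical stand-in; the log does not show the variable's name):

  # restore.sh line 54 effectively ran: [ '' -eq 1 ]  -> not a number
  [ "${opt:-0}" -eq 1 ] && echo "option enabled"   # default to 0 so the test stays numeric

The other note is the -t 240 on the create RPC: first-time startup scrubs the entire NV cache data region (5 chunks, roughly 4 s on this 5171 MiB cache, and proportionally longer on bigger caches), so the client timeout is raised well above the rpc.py default to ride out the scrub.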
[2024-11-03 10:12:45.358254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.358267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.278 [2024-11-03 10:12:45.358279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:17.278 [2024-11-03 10:12:45.358291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.381988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.382046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.278 [2024-11-03 10:12:45.382061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.667 ms 00:18:17.278 [2024-11-03 10:12:45.382075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.391851] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:17.278 [2024-11-03 10:12:45.395688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.278 [2024-11-03 10:12:45.395738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:17.278 [2024-11-03 10:12:45.395750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.483 ms 00:18:17.278 [2024-11-03 10:12:45.395760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.278 [2024-11-03 10:12:45.489801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.490034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:17.279 [2024-11-03 10:12:45.490057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.008 ms 00:18:17.279 [2024-11-03 10:12:45.490071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.490301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.490317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:17.279 [2024-11-03 10:12:45.490327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:18:17.279 [2024-11-03 10:12:45.490337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.496515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.496691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:17.279 [2024-11-03 10:12:45.496710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:18:17.279 [2024-11-03 10:12:45.496721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.501214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.501288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:17.279 [2024-11-03 10:12:45.501299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.448 ms 00:18:17.279 [2024-11-03 10:12:45.501310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.501673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.501686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:17.279 
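The line "l2p maximum resident size is: 9 (of 10) MiB" is --l2p_dram_limit 10 taking effect. The full L2P table for this device occupies 80 MiB on media (20971520 entries x 4 B, matching the "Region l2p ... 80.00 MiB" in the layout dump above), but at most about 10 MiB of it stays resident in DRAM, with the rest paged in on demand. As quick shell arithmetic:

  echo $(( 20971520 * 4 / 1024 / 1024 ))  # 80      -> full L2P size in MiB
  echo $(( 9 * 1024 * 1024 / 4 ))         # 2359296 -> entries resident under the cap, ~11% of the table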
[2024-11-03 10:12:45.501696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:18:17.279 [2024-11-03 10:12:45.501708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.550047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.550109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:17.279 [2024-11-03 10:12:45.550122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.299 ms 00:18:17.279 [2024-11-03 10:12:45.550133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.557371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.557431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:17.279 [2024-11-03 10:12:45.557443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.173 ms 00:18:17.279 [2024-11-03 10:12:45.557453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.563436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.563617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:17.279 [2024-11-03 10:12:45.563635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:18:17.279 [2024-11-03 10:12:45.563645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.569999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.570057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:17.279 [2024-11-03 10:12:45.570067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.311 ms 00:18:17.279 [2024-11-03 10:12:45.570080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.570133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.570150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:17.279 [2024-11-03 10:12:45.570159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:17.279 [2024-11-03 10:12:45.570190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.570322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.279 [2024-11-03 10:12:45.570336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:17.279 [2024-11-03 10:12:45.570345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:17.279 [2024-11-03 10:12:45.570354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.279 [2024-11-03 10:12:45.571621] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4226.021 ms, result 0 00:18:17.279 { 00:18:17.279 "name": "ftl0", 00:18:17.279 "uuid": "7126a7fa-c0d4-4fad-a178-ffc386b8e8f1" 00:18:17.279 } 00:18:17.279 10:12:45 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:17.279 10:12:45 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:17.540 10:12:45 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:17.540 10:12:45 ftl.ftl_restore -- 
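The echo '{"subsystems": [' / save_subsystem_config -n bdev / echo ']}' triple above captures the just-built bdev stack as a standalone JSON config, which spdk_dd replays later with --json to recreate ftl0 in a fresh process. Its shape, with the output path inferred from the spdk_dd invocation further down the log:

  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json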
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:17.803 [2024-11-03 10:12:46.018845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.019049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.803 [2024-11-03 10:12:46.019077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:17.803 [2024-11-03 10:12:46.019087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.019122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:17.803 [2024-11-03 10:12:46.019892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.019938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.803 [2024-11-03 10:12:46.019949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:18:17.803 [2024-11-03 10:12:46.019960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.020257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.020271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.803 [2024-11-03 10:12:46.020280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:18:17.803 [2024-11-03 10:12:46.020291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.023527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.023555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:17.803 [2024-11-03 10:12:46.023565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:18:17.803 [2024-11-03 10:12:46.023575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.029805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.029987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.803 [2024-11-03 10:12:46.030007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:18:17.803 [2024-11-03 10:12:46.030017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.033047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.033101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.803 [2024-11-03 10:12:46.033111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:18:17.803 [2024-11-03 10:12:46.033125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.039714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.039901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.803 [2024-11-03 10:12:46.040017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.542 ms 00:18:17.803 [2024-11-03 10:12:46.040048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.040201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.040379] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.803 [2024-11-03 10:12:46.040411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:17.803 [2024-11-03 10:12:46.040438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.043969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.044151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:17.803 [2024-11-03 10:12:46.044219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:18:17.803 [2024-11-03 10:12:46.044267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.047086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.047267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:17.803 [2024-11-03 10:12:46.047368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:18:17.803 [2024-11-03 10:12:46.047395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.049651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.049805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.803 [2024-11-03 10:12:46.049860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.205 ms 00:18:17.803 [2024-11-03 10:12:46.049885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.052160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.803 [2024-11-03 10:12:46.052329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.803 [2024-11-03 10:12:46.052396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:18:17.803 [2024-11-03 10:12:46.052421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.803 [2024-11-03 10:12:46.052473] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.804 [2024-11-03 10:12:46.052507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052876] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.052999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.053961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 
[2024-11-03 10:12:46.054133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.054943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:17.804 [2024-11-03 10:12:46.055259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.804 [2024-11-03 10:12:46.055527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.805 [2024-11-03 10:12:46.055633] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.805 [2024-11-03 10:12:46.055641] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7126a7fa-c0d4-4fad-a178-ffc386b8e8f1 00:18:17.805 [2024-11-03 10:12:46.055652] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.805 [2024-11-03 10:12:46.055659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.805 [2024-11-03 10:12:46.055670] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.805 [2024-11-03 10:12:46.055677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.805 [2024-11-03 10:12:46.055686] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.805 [2024-11-03 10:12:46.055694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.805 [2024-11-03 10:12:46.055710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.805 [2024-11-03 10:12:46.055716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.805 [2024-11-03 10:12:46.055725] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:17.805 [2024-11-03 10:12:46.055733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.805 [2024-11-03 10:12:46.055743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.805 [2024-11-03 10:12:46.055756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.261 ms 00:18:17.805 [2024-11-03 10:12:46.055766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.058056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.805 [2024-11-03 10:12:46.058215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.805 [2024-11-03 10:12:46.058255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.265 ms 00:18:17.805 [2024-11-03 10:12:46.058267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.058419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.805 [2024-11-03 10:12:46.058432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.805 [2024-11-03 10:12:46.058441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:17.805 [2024-11-03 10:12:46.058450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.066458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.066512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.805 [2024-11-03 10:12:46.066530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.066542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.066608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.066621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.805 [2024-11-03 10:12:46.066628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.066638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.066715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.066731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.805 [2024-11-03 10:12:46.066738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.066748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.066767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.066780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.805 [2024-11-03 10:12:46.066788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.066803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.079953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.080011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.805 [2024-11-03 10:12:46.080022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 
[2024-11-03 10:12:46.080033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.090717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.090774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.805 [2024-11-03 10:12:46.090785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.090798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.090872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.090886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.805 [2024-11-03 10:12:46.090895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.090905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.090974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.090987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.805 [2024-11-03 10:12:46.090995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.091007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.091078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.091090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.805 [2024-11-03 10:12:46.091099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.091109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.091144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.091156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.805 [2024-11-03 10:12:46.091164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.091177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.091219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.091258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.805 [2024-11-03 10:12:46.091267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.091276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.091325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.805 [2024-11-03 10:12:46.091338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.805 [2024-11-03 10:12:46.091347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.805 [2024-11-03 10:12:46.091361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.805 [2024-11-03 10:12:46.091508] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.622 ms, result 0 00:18:17.805 true 00:18:17.805 10:12:46 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86063 00:18:17.805 
10:12:46 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86063 ']' 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86063 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86063 00:18:17.805 killing process with pid 86063 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86063' 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86063 00:18:17.805 10:12:46 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86063 00:18:23.132 10:12:51 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:27.338 262144+0 records in 00:18:27.338 262144+0 records out 00:18:27.338 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.98789 s, 269 MB/s 00:18:27.338 10:12:55 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:29.256 10:12:57 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:29.256 [2024-11-03 10:12:57.266399] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:29.256 [2024-11-03 10:12:57.266511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86282 ] 00:18:29.256 [2024-11-03 10:12:57.412800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.256 [2024-11-03 10:12:57.461765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.256 [2024-11-03 10:12:57.579288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:29.256 [2024-11-03 10:12:57.579364] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:29.519 [2024-11-03 10:12:57.740536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.740791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:29.519 [2024-11-03 10:12:57.740825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:29.519 [2024-11-03 10:12:57.740841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.740933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.740946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.519 [2024-11-03 10:12:57.740958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:29.519 [2024-11-03 10:12:57.740976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.741002] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:18:29.519 [2024-11-03 10:12:57.741337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:29.519 [2024-11-03 10:12:57.741359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.741369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.519 [2024-11-03 10:12:57.741385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:18:29.519 [2024-11-03 10:12:57.741395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.743692] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:29.519 [2024-11-03 10:12:57.748065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.748314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:29.519 [2024-11-03 10:12:57.748347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.375 ms 00:18:29.519 [2024-11-03 10:12:57.748356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.748444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.748463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:29.519 [2024-11-03 10:12:57.748479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:29.519 [2024-11-03 10:12:57.748488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.760210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.760275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.519 [2024-11-03 10:12:57.760288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.671 ms 00:18:29.519 [2024-11-03 10:12:57.760297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.760409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.760420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.519 [2024-11-03 10:12:57.760429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:29.519 [2024-11-03 10:12:57.760438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.760510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.760528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:29.519 [2024-11-03 10:12:57.760543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:29.519 [2024-11-03 10:12:57.760555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.760587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:29.519 [2024-11-03 10:12:57.763292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.763495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.519 [2024-11-03 10:12:57.763514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:18:29.519 [2024-11-03 10:12:57.763523] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.763571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.519 [2024-11-03 10:12:57.763581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:29.519 [2024-11-03 10:12:57.763592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:29.519 [2024-11-03 10:12:57.763609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.519 [2024-11-03 10:12:57.763637] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:29.519 [2024-11-03 10:12:57.763670] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:29.519 [2024-11-03 10:12:57.763720] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:29.519 [2024-11-03 10:12:57.763739] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:29.519 [2024-11-03 10:12:57.763853] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:29.520 [2024-11-03 10:12:57.763866] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:29.520 [2024-11-03 10:12:57.763878] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:29.520 [2024-11-03 10:12:57.763890] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:29.520 [2024-11-03 10:12:57.763903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:29.520 [2024-11-03 10:12:57.763914] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:29.520 [2024-11-03 10:12:57.763927] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:29.520 [2024-11-03 10:12:57.763935] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:29.520 [2024-11-03 10:12:57.763945] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:29.520 [2024-11-03 10:12:57.763961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.520 [2024-11-03 10:12:57.763969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:29.520 [2024-11-03 10:12:57.763978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:18:29.520 [2024-11-03 10:12:57.763986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.520 [2024-11-03 10:12:57.764076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.520 [2024-11-03 10:12:57.764091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:29.520 [2024-11-03 10:12:57.764122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:29.520 [2024-11-03 10:12:57.764131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.520 [2024-11-03 10:12:57.764261] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:29.520 [2024-11-03 10:12:57.764275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:29.520 [2024-11-03 10:12:57.764287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:29.520 [2024-11-03 10:12:57.764297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:29.520 [2024-11-03 10:12:57.764315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:29.520 [2024-11-03 10:12:57.764345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.520 [2024-11-03 10:12:57.764363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:29.520 [2024-11-03 10:12:57.764372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:29.520 [2024-11-03 10:12:57.764385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.520 [2024-11-03 10:12:57.764394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:29.520 [2024-11-03 10:12:57.764401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:29.520 [2024-11-03 10:12:57.764408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:29.520 [2024-11-03 10:12:57.764425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:29.520 [2024-11-03 10:12:57.764447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:29.520 [2024-11-03 10:12:57.764469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:29.520 [2024-11-03 10:12:57.764492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:29.520 [2024-11-03 10:12:57.764518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:29.520 [2024-11-03 10:12:57.764540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.520 [2024-11-03 10:12:57.764554] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:29.520 [2024-11-03 10:12:57.764561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:29.520 [2024-11-03 10:12:57.764567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.520 [2024-11-03 10:12:57.764575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:29.520 [2024-11-03 10:12:57.764583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:29.520 [2024-11-03 10:12:57.764590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:29.520 [2024-11-03 10:12:57.764603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:29.520 [2024-11-03 10:12:57.764610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:29.520 [2024-11-03 10:12:57.764640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:29.520 [2024-11-03 10:12:57.764648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.520 [2024-11-03 10:12:57.764667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:29.520 [2024-11-03 10:12:57.764674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:29.520 [2024-11-03 10:12:57.764684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:29.520 [2024-11-03 10:12:57.764691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:29.520 [2024-11-03 10:12:57.764698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:29.520 [2024-11-03 10:12:57.764705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:29.520 [2024-11-03 10:12:57.764714] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:29.520 [2024-11-03 10:12:57.764725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.520 [2024-11-03 10:12:57.764735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:29.520 [2024-11-03 10:12:57.764745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:29.520 [2024-11-03 10:12:57.764753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:29.520 [2024-11-03 10:12:57.764760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:29.520 [2024-11-03 10:12:57.764769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:29.520 [2024-11-03 10:12:57.764779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:29.520 [2024-11-03 10:12:57.764786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:29.520 [2024-11-03 10:12:57.764794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:29.520 [2024-11-03 10:12:57.764801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:29.520 [2024-11-03 10:12:57.764816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:29.520 [2024-11-03 10:12:57.764824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:29.520 [2024-11-03 10:12:57.764831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:29.520 [2024-11-03 10:12:57.764838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:29.520 [2024-11-03 10:12:57.764846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:29.520 [2024-11-03 10:12:57.764854] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:29.520 [2024-11-03 10:12:57.764863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.520 [2024-11-03 10:12:57.764873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:29.520 [2024-11-03 10:12:57.764880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:29.520 [2024-11-03 10:12:57.764888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:29.520 [2024-11-03 10:12:57.764896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:29.520 [2024-11-03 10:12:57.764905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.520 [2024-11-03 10:12:57.764916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:29.520 [2024-11-03 10:12:57.764923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:18:29.520 [2024-11-03 10:12:57.764931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.520 [2024-11-03 10:12:57.792165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.520 [2024-11-03 10:12:57.792250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.520 [2024-11-03 10:12:57.792269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.161 ms 00:18:29.521 [2024-11-03 10:12:57.792279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.792382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.792397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:29.521 [2024-11-03 10:12:57.792432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.064 ms 00:18:29.521 [2024-11-03 10:12:57.792441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.808338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.808390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.521 [2024-11-03 10:12:57.808403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.825 ms 00:18:29.521 [2024-11-03 10:12:57.808413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.808454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.808465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.521 [2024-11-03 10:12:57.808476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.521 [2024-11-03 10:12:57.808485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.809199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.809272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.521 [2024-11-03 10:12:57.809284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:18:29.521 [2024-11-03 10:12:57.809297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.809470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.809483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.521 [2024-11-03 10:12:57.809492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:29.521 [2024-11-03 10:12:57.809503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.818929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.818978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.521 [2024-11-03 10:12:57.819002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.395 ms 00:18:29.521 [2024-11-03 10:12:57.819015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.823340] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:29.521 [2024-11-03 10:12:57.823396] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:29.521 [2024-11-03 10:12:57.823411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.823422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:29.521 [2024-11-03 10:12:57.823431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.280 ms 00:18:29.521 [2024-11-03 10:12:57.823440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.839833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.839895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:29.521 [2024-11-03 10:12:57.839909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.336 ms 00:18:29.521 [2024-11-03 10:12:57.839921] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.843060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.843308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:29.521 [2024-11-03 10:12:57.843329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:18:29.521 [2024-11-03 10:12:57.843338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.846072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.846121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:29.521 [2024-11-03 10:12:57.846133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.686 ms 00:18:29.521 [2024-11-03 10:12:57.846141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.846516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.846536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:29.521 [2024-11-03 10:12:57.846546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:29.521 [2024-11-03 10:12:57.846554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.521 [2024-11-03 10:12:57.876755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.521 [2024-11-03 10:12:57.876819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:29.521 [2024-11-03 10:12:57.876838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.180 ms 00:18:29.521 [2024-11-03 10:12:57.876853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.782 [2024-11-03 10:12:57.885627] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:29.782 [2024-11-03 10:12:57.889259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.782 [2024-11-03 10:12:57.889300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.782 [2024-11-03 10:12:57.889313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.347 ms 00:18:29.782 [2024-11-03 10:12:57.889335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.782 [2024-11-03 10:12:57.889416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.782 [2024-11-03 10:12:57.889429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:29.782 [2024-11-03 10:12:57.889439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:29.782 [2024-11-03 10:12:57.889450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.782 [2024-11-03 10:12:57.889539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.782 [2024-11-03 10:12:57.889550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:29.782 [2024-11-03 10:12:57.889560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:29.782 [2024-11-03 10:12:57.889569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.782 [2024-11-03 10:12:57.889601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.783 [2024-11-03 10:12:57.889620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:29.783 [2024-11-03 10:12:57.889633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:29.783 [2024-11-03 10:12:57.889641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.783 [2024-11-03 10:12:57.889686] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:29.783 [2024-11-03 10:12:57.889699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.783 [2024-11-03 10:12:57.889709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:29.783 [2024-11-03 10:12:57.889719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:29.783 [2024-11-03 10:12:57.889735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.783 [2024-11-03 10:12:57.896217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.783 [2024-11-03 10:12:57.896291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:29.783 [2024-11-03 10:12:57.896304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.461 ms 00:18:29.783 [2024-11-03 10:12:57.896314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.783 [2024-11-03 10:12:57.896413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.783 [2024-11-03 10:12:57.896426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:29.783 [2024-11-03 10:12:57.896436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:29.783 [2024-11-03 10:12:57.896447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.783 [2024-11-03 10:12:57.898263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.177 ms, result 0 00:18:30.727  [2024-11-03T10:13:00.031Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-03T10:13:00.975Z] Copying: 43/1024 [MB] (24 MBps) [2024-11-03T10:13:01.919Z] Copying: 56/1024 [MB] (12 MBps) [2024-11-03T10:13:03.306Z] Copying: 84/1024 [MB] (27 MBps) [2024-11-03T10:13:04.251Z] Copying: 103/1024 [MB] (19 MBps) [2024-11-03T10:13:05.195Z] Copying: 124/1024 [MB] (21 MBps) [2024-11-03T10:13:06.140Z] Copying: 142/1024 [MB] (17 MBps) [2024-11-03T10:13:07.083Z] Copying: 170/1024 [MB] (28 MBps) [2024-11-03T10:13:08.028Z] Copying: 203/1024 [MB] (32 MBps) [2024-11-03T10:13:08.972Z] Copying: 222/1024 [MB] (19 MBps) [2024-11-03T10:13:09.911Z] Copying: 242/1024 [MB] (19 MBps) [2024-11-03T10:13:11.322Z] Copying: 260/1024 [MB] (18 MBps) [2024-11-03T10:13:12.259Z] Copying: 278/1024 [MB] (17 MBps) [2024-11-03T10:13:13.200Z] Copying: 292/1024 [MB] (14 MBps) [2024-11-03T10:13:14.140Z] Copying: 310/1024 [MB] (18 MBps) [2024-11-03T10:13:15.078Z] Copying: 326/1024 [MB] (15 MBps) [2024-11-03T10:13:16.018Z] Copying: 340/1024 [MB] (13 MBps) [2024-11-03T10:13:16.958Z] Copying: 353/1024 [MB] (13 MBps) [2024-11-03T10:13:18.341Z] Copying: 373/1024 [MB] (20 MBps) [2024-11-03T10:13:18.911Z] Copying: 389/1024 [MB] (15 MBps) [2024-11-03T10:13:20.294Z] Copying: 407/1024 [MB] (18 MBps) [2024-11-03T10:13:21.235Z] Copying: 424/1024 [MB] (17 MBps) [2024-11-03T10:13:22.176Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-03T10:13:23.116Z] Copying: 446/1024 [MB] (10 MBps) [2024-11-03T10:13:24.058Z] Copying: 457/1024 [MB] (11 MBps) [2024-11-03T10:13:24.999Z] Copying: 470/1024 [MB] (13 MBps) [2024-11-03T10:13:25.941Z] Copying: 482/1024 [MB] (11 MBps) 
[2024-11-03T10:13:27.327Z] Copying: 492/1024 [MB] (10 MBps) [2024-11-03T10:13:27.952Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-03T10:13:29.335Z] Copying: 514/1024 [MB] (10 MBps) [2024-11-03T10:13:30.273Z] Copying: 539/1024 [MB] (24 MBps) [2024-11-03T10:13:31.211Z] Copying: 570/1024 [MB] (31 MBps) [2024-11-03T10:13:32.150Z] Copying: 590/1024 [MB] (20 MBps) [2024-11-03T10:13:33.092Z] Copying: 605/1024 [MB] (15 MBps) [2024-11-03T10:13:34.032Z] Copying: 621/1024 [MB] (15 MBps) [2024-11-03T10:13:34.975Z] Copying: 637/1024 [MB] (15 MBps) [2024-11-03T10:13:35.915Z] Copying: 659/1024 [MB] (21 MBps) [2024-11-03T10:13:37.299Z] Copying: 677/1024 [MB] (17 MBps) [2024-11-03T10:13:38.241Z] Copying: 697/1024 [MB] (20 MBps) [2024-11-03T10:13:39.185Z] Copying: 717/1024 [MB] (20 MBps) [2024-11-03T10:13:40.143Z] Copying: 732/1024 [MB] (15 MBps) [2024-11-03T10:13:41.086Z] Copying: 748/1024 [MB] (16 MBps) [2024-11-03T10:13:42.030Z] Copying: 761/1024 [MB] (12 MBps) [2024-11-03T10:13:42.973Z] Copying: 773/1024 [MB] (11 MBps) [2024-11-03T10:13:43.917Z] Copying: 789/1024 [MB] (16 MBps) [2024-11-03T10:13:45.340Z] Copying: 804/1024 [MB] (14 MBps) [2024-11-03T10:13:45.912Z] Copying: 814/1024 [MB] (10 MBps) [2024-11-03T10:13:47.298Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-03T10:13:48.241Z] Copying: 835/1024 [MB] (10 MBps) [2024-11-03T10:13:49.185Z] Copying: 845/1024 [MB] (10 MBps) [2024-11-03T10:13:50.129Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-03T10:13:51.073Z] Copying: 866/1024 [MB] (10 MBps) [2024-11-03T10:13:52.017Z] Copying: 879/1024 [MB] (12 MBps) [2024-11-03T10:13:52.962Z] Copying: 910/1024 [MB] (31 MBps) [2024-11-03T10:13:54.349Z] Copying: 920/1024 [MB] (10 MBps) [2024-11-03T10:13:54.922Z] Copying: 932/1024 [MB] (12 MBps) [2024-11-03T10:13:56.308Z] Copying: 971/1024 [MB] (38 MBps) [2024-11-03T10:13:57.253Z] Copying: 991/1024 [MB] (19 MBps) [2024-11-03T10:13:57.826Z] Copying: 1006/1024 [MB] (15 MBps) [2024-11-03T10:13:57.826Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-03 10:13:57.786888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.786952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:29.464 [2024-11-03 10:13:57.786968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:29.464 [2024-11-03 10:13:57.786977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.464 [2024-11-03 10:13:57.786999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:29.464 [2024-11-03 10:13:57.787837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.787872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:29.464 [2024-11-03 10:13:57.787885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:19:29.464 [2024-11-03 10:13:57.787894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.464 [2024-11-03 10:13:57.790151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.790354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:29.464 [2024-11-03 10:13:57.790375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:19:29.464 [2024-11-03 10:13:57.790384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.464 [2024-11-03 10:13:57.807328] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.807382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:29.464 [2024-11-03 10:13:57.807403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.919 ms 00:19:29.464 [2024-11-03 10:13:57.807412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.464 [2024-11-03 10:13:57.813600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.813782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:29.464 [2024-11-03 10:13:57.813803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.144 ms 00:19:29.464 [2024-11-03 10:13:57.813811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.464 [2024-11-03 10:13:57.816430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.816481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:29.464 [2024-11-03 10:13:57.816491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.552 ms 00:19:29.464 [2024-11-03 10:13:57.816499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.464 [2024-11-03 10:13:57.821953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.822017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:29.464 [2024-11-03 10:13:57.822028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.407 ms 00:19:29.464 [2024-11-03 10:13:57.822036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.464 [2024-11-03 10:13:57.822163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.464 [2024-11-03 10:13:57.822173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:29.464 [2024-11-03 10:13:57.822182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:29.464 [2024-11-03 10:13:57.822190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.727 [2024-11-03 10:13:57.825632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.727 [2024-11-03 10:13:57.825684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:29.727 [2024-11-03 10:13:57.825695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.416 ms 00:19:29.727 [2024-11-03 10:13:57.825702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.727 [2024-11-03 10:13:57.828558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.727 [2024-11-03 10:13:57.828607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:29.727 [2024-11-03 10:13:57.828617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.811 ms 00:19:29.727 [2024-11-03 10:13:57.828626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.727 [2024-11-03 10:13:57.830984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.727 [2024-11-03 10:13:57.831034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:29.727 [2024-11-03 10:13:57.831044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:19:29.727 [2024-11-03 10:13:57.831051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.727 
[2024-11-03 10:13:57.833655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.727 [2024-11-03 10:13:57.833708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:29.727 [2024-11-03 10:13:57.833718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:19:29.727 [2024-11-03 10:13:57.833726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.727 [2024-11-03 10:13:57.833767] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:29.727 [2024-11-03 10:13:57.833783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:19:29.727 [2024-11-03 10:13:57.833963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:29.727 [2024-11-03 10:13:57.833982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.833990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.833998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834584] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:29.728 [2024-11-03 10:13:57.834629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:29.728 [2024-11-03 10:13:57.834637] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7126a7fa-c0d4-4fad-a178-ffc386b8e8f1 00:19:29.728 [2024-11-03 10:13:57.834646] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:29.728 [2024-11-03 10:13:57.834658] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:29.728 [2024-11-03 10:13:57.834664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:29.728 [2024-11-03 10:13:57.834673] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:29.728 [2024-11-03 10:13:57.834681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:29.728 [2024-11-03 10:13:57.834690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:29.728 [2024-11-03 10:13:57.834697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:29.728 [2024-11-03 10:13:57.834703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:29.728 [2024-11-03 10:13:57.834709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:29.728 [2024-11-03 10:13:57.834717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.728 [2024-11-03 10:13:57.834724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:29.728 [2024-11-03 10:13:57.834738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:19:29.728 [2024-11-03 10:13:57.834753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.728 [2024-11-03 10:13:57.837244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.728 [2024-11-03 10:13:57.837274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:29.728 [2024-11-03 10:13:57.837294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:19:29.729 [2024-11-03 10:13:57.837304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.837431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.729 [2024-11-03 10:13:57.837441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:29.729 [2024-11-03 10:13:57.837457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:29.729 [2024-11-03 10:13:57.837470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.844565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.844624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.729 [2024-11-03 10:13:57.844635] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.844643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.844705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.844714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.729 [2024-11-03 10:13:57.844726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.844738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.844808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.844817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.729 [2024-11-03 10:13:57.844826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.844834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.844870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.844879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.729 [2024-11-03 10:13:57.844886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.844897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.858827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.859019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.729 [2024-11-03 10:13:57.859038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.859046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.869337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.869493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.729 [2024-11-03 10:13:57.869509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.869526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.869575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.869591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.729 [2024-11-03 10:13:57.869603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.869612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.869647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.869657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.729 [2024-11-03 10:13:57.869665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.869673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.869747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.869756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:19:29.729 [2024-11-03 10:13:57.869765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.869772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.869802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.869811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.729 [2024-11-03 10:13:57.869819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.869827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.869871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.869881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.729 [2024-11-03 10:13:57.869889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.869897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.869945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.729 [2024-11-03 10:13:57.869955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.729 [2024-11-03 10:13:57.869964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.729 [2024-11-03 10:13:57.869972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.729 [2024-11-03 10:13:57.870110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.189 ms, result 0 00:19:29.991 00:19:29.991 00:19:29.991 10:13:58 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:30.252 [2024-11-03 10:13:58.401206] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
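The restore check above pipes the FTL bdev back out through spdk_dd: --ib=ftl0 names the restored FTL device as input, --count=262144 asks for 262144 blocks (1024 MiB at the 4 KiB block size, which matches the x/1024 [MB] progress output further down), and the result lands in test/ftl/testfile for later comparison. A minimal sketch of driving the same invocation from Python, assuming only the flags and paths visible in the log above:

    import subprocess

    # Mirrors the spdk_dd command line recorded above; paths are copied from the log.
    cmd = [
        "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd",
        "--ib=ftl0",          # input bdev: the restored FTL device
        "--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile",          # output file to diff later
        "--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json", # bdev config for the FTL stack
        "--count=262144",     # blocks to copy: 262144 * 4 KiB = 1024 MiB
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("spdk_dd failed: " + result.stderr[-500:])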
00:19:30.252 [2024-11-03 10:13:58.401361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86915 ] 00:19:30.252 [2024-11-03 10:13:58.537206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.252 [2024-11-03 10:13:58.588336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.513 [2024-11-03 10:13:58.699998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.513 [2024-11-03 10:13:58.700371] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.513 [2024-11-03 10:13:58.860906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.513 [2024-11-03 10:13:58.860971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.513 [2024-11-03 10:13:58.860990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.513 [2024-11-03 10:13:58.861003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.513 [2024-11-03 10:13:58.861059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.513 [2024-11-03 10:13:58.861070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.513 [2024-11-03 10:13:58.861080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:30.513 [2024-11-03 10:13:58.861094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.513 [2024-11-03 10:13:58.861117] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.513 [2024-11-03 10:13:58.861439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.513 [2024-11-03 10:13:58.861460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.513 [2024-11-03 10:13:58.861470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.513 [2024-11-03 10:13:58.861482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:19:30.513 [2024-11-03 10:13:58.861490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.513 [2024-11-03 10:13:58.863203] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.513 [2024-11-03 10:13:58.867028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.513 [2024-11-03 10:13:58.867085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.513 [2024-11-03 10:13:58.867105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.828 ms 00:19:30.513 [2024-11-03 10:13:58.867114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.513 [2024-11-03 10:13:58.867195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.513 [2024-11-03 10:13:58.867208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.513 [2024-11-03 10:13:58.867220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:30.513 [2024-11-03 10:13:58.867260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.875746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:30.776 [2024-11-03 10:13:58.875793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.776 [2024-11-03 10:13:58.875804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.439 ms 00:19:30.776 [2024-11-03 10:13:58.875812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.875917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.776 [2024-11-03 10:13:58.875927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.776 [2024-11-03 10:13:58.875935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:30.776 [2024-11-03 10:13:58.875944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.876011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.776 [2024-11-03 10:13:58.876023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.776 [2024-11-03 10:13:58.876031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:30.776 [2024-11-03 10:13:58.876039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.876061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.776 [2024-11-03 10:13:58.878365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.776 [2024-11-03 10:13:58.878405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.776 [2024-11-03 10:13:58.878416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.309 ms 00:19:30.776 [2024-11-03 10:13:58.878424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.878459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.776 [2024-11-03 10:13:58.878469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.776 [2024-11-03 10:13:58.878478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:30.776 [2024-11-03 10:13:58.878485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.878508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.776 [2024-11-03 10:13:58.878537] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.776 [2024-11-03 10:13:58.878582] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.776 [2024-11-03 10:13:58.878602] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:30.776 [2024-11-03 10:13:58.878712] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.776 [2024-11-03 10:13:58.878724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.776 [2024-11-03 10:13:58.878736] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:30.776 [2024-11-03 10:13:58.878747] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.776 [2024-11-03 10:13:58.878760] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.776 [2024-11-03 10:13:58.878769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:30.776 [2024-11-03 10:13:58.878776] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:30.776 [2024-11-03 10:13:58.878783] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.776 [2024-11-03 10:13:58.878791] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.776 [2024-11-03 10:13:58.878799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.776 [2024-11-03 10:13:58.878806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.776 [2024-11-03 10:13:58.878814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:30.776 [2024-11-03 10:13:58.878822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.878907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.776 [2024-11-03 10:13:58.878916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.776 [2024-11-03 10:13:58.878927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:30.776 [2024-11-03 10:13:58.878935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.776 [2024-11-03 10:13:58.879033] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.776 [2024-11-03 10:13:58.879044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.776 [2024-11-03 10:13:58.879056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.776 [2024-11-03 10:13:58.879065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.776 [2024-11-03 10:13:58.879078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.776 [2024-11-03 10:13:58.879086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.776 [2024-11-03 10:13:58.879094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:30.776 [2024-11-03 10:13:58.879104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.776 [2024-11-03 10:13:58.879113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:30.776 [2024-11-03 10:13:58.879122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.776 [2024-11-03 10:13:58.879130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.776 [2024-11-03 10:13:58.879138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:30.776 [2024-11-03 10:13:58.879148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.776 [2024-11-03 10:13:58.879159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.776 [2024-11-03 10:13:58.879168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:30.776 [2024-11-03 10:13:58.879175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.776 [2024-11-03 10:13:58.879183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:30.776 [2024-11-03 10:13:58.879191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:30.776 [2024-11-03 10:13:58.879199] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.776 [2024-11-03 10:13:58.879207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.777 [2024-11-03 10:13:58.879214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.777 [2024-11-03 10:13:58.879257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.777 [2024-11-03 10:13:58.879265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.777 [2024-11-03 10:13:58.879280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.777 [2024-11-03 10:13:58.879288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.777 [2024-11-03 10:13:58.879310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.777 [2024-11-03 10:13:58.879318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.777 [2024-11-03 10:13:58.879334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.777 [2024-11-03 10:13:58.879342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.777 [2024-11-03 10:13:58.879360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.777 [2024-11-03 10:13:58.879368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:30.777 [2024-11-03 10:13:58.879375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.777 [2024-11-03 10:13:58.879383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.777 [2024-11-03 10:13:58.879390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:30.777 [2024-11-03 10:13:58.879397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.777 [2024-11-03 10:13:58.879411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:30.777 [2024-11-03 10:13:58.879418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879425] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.777 [2024-11-03 10:13:58.879442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.777 [2024-11-03 10:13:58.879450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.777 [2024-11-03 10:13:58.879460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.777 [2024-11-03 10:13:58.879469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.777 [2024-11-03 10:13:58.879476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.777 [2024-11-03 10:13:58.879483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.777 
[2024-11-03 10:13:58.879490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.777 [2024-11-03 10:13:58.879497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.777 [2024-11-03 10:13:58.879504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:30.777 [2024-11-03 10:13:58.879512] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.777 [2024-11-03 10:13:58.879522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.777 [2024-11-03 10:13:58.879530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:30.777 [2024-11-03 10:13:58.879538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:30.777 [2024-11-03 10:13:58.879545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:30.777 [2024-11-03 10:13:58.879552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:30.777 [2024-11-03 10:13:58.879561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:30.777 [2024-11-03 10:13:58.879571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:30.777 [2024-11-03 10:13:58.879578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:30.777 [2024-11-03 10:13:58.879585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:30.777 [2024-11-03 10:13:58.879592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:30.777 [2024-11-03 10:13:58.879605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:30.777 [2024-11-03 10:13:58.879612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:30.777 [2024-11-03 10:13:58.879619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:30.777 [2024-11-03 10:13:58.879627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:30.777 [2024-11-03 10:13:58.879634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:30.777 [2024-11-03 10:13:58.879641] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.777 [2024-11-03 10:13:58.879649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.777 [2024-11-03 10:13:58.879661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.777 [2024-11-03 10:13:58.879668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.777 [2024-11-03 10:13:58.879676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.777 [2024-11-03 10:13:58.879684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.777 [2024-11-03 10:13:58.879693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.879704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.777 [2024-11-03 10:13:58.879713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:19:30.777 [2024-11-03 10:13:58.879721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.905367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.905443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.777 [2024-11-03 10:13:58.905466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.593 ms 00:19:30.777 [2024-11-03 10:13:58.905479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.905616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.905630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.777 [2024-11-03 10:13:58.905643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:30.777 [2024-11-03 10:13:58.905663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.918952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.919005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.777 [2024-11-03 10:13:58.919016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.193 ms 00:19:30.777 [2024-11-03 10:13:58.919024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.919065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.919073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.777 [2024-11-03 10:13:58.919082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:30.777 [2024-11-03 10:13:58.919090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.919699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.919749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.777 [2024-11-03 10:13:58.919767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:30.777 [2024-11-03 10:13:58.919778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.919932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.919950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.777 [2024-11-03 10:13:58.919961] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:30.777 [2024-11-03 10:13:58.919970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.927304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.927349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.777 [2024-11-03 10:13:58.927366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.307 ms 00:19:30.777 [2024-11-03 10:13:58.927374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.931455] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:30.777 [2024-11-03 10:13:58.931515] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.777 [2024-11-03 10:13:58.931528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.931537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.777 [2024-11-03 10:13:58.931546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.054 ms 00:19:30.777 [2024-11-03 10:13:58.931554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.948034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.948094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.777 [2024-11-03 10:13:58.948123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.422 ms 00:19:30.777 [2024-11-03 10:13:58.948131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.777 [2024-11-03 10:13:58.951351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.777 [2024-11-03 10:13:58.951398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.777 [2024-11-03 10:13:58.951409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:19:30.777 [2024-11-03 10:13:58.951417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.954328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.954520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.778 [2024-11-03 10:13:58.954539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:19:30.778 [2024-11-03 10:13:58.954547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.954889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.954909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.778 [2024-11-03 10:13:58.954919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:30.778 [2024-11-03 10:13:58.954927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.980582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.980644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.778 [2024-11-03 10:13:58.980663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.637 ms 00:19:30.778 [2024-11-03 10:13:58.980671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.988944] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:30.778 [2024-11-03 10:13:58.991959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.991999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.778 [2024-11-03 10:13:58.992018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.237 ms 00:19:30.778 [2024-11-03 10:13:58.992027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.992103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.992127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.778 [2024-11-03 10:13:58.992137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:30.778 [2024-11-03 10:13:58.992145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.992216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.992244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.778 [2024-11-03 10:13:58.992255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:30.778 [2024-11-03 10:13:58.992266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.992301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.992311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.778 [2024-11-03 10:13:58.992320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:30.778 [2024-11-03 10:13:58.992328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.992364] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.778 [2024-11-03 10:13:58.992377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.992388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.778 [2024-11-03 10:13:58.992397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:30.778 [2024-11-03 10:13:58.992405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.998146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.998344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.778 [2024-11-03 10:13:58.998408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.718 ms 00:19:30.778 [2024-11-03 10:13:58.998443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 [2024-11-03 10:13:58.998534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.778 [2024-11-03 10:13:58.998560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.778 [2024-11-03 10:13:58.998635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:30.778 [2024-11-03 10:13:58.998658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.778 
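Every management step in the startup sequence above is traced by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, and the finish_msg record just below totals the whole sequence at 138.510 ms. A rough sketch of extracting the per-step timings from such a log with Python; the regexes assume exactly the record format shown here, including the hh:mm:ss pipeline timestamp that trails each record:

    import re

    NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?) \d{2}:\d{2}:\d{2}")
    DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms")

    def step_durations(log_text):
        # trace_step emits the 428 (name) record immediately before the
        # 430 (duration) record of the same step, so pairing them in order is safe
        names = NAME_RE.findall(log_text)
        durations = [float(d) for d in DUR_RE.findall(log_text)]
        return list(zip(names, durations))

Summing the step durations of the 'FTL startup' sequence should land close to the 138.510 ms that finish_msg reports below; any gap is time spent between steps.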
[2024-11-03 10:13:58.999884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.510 ms, result 0
00:19:32.170  [2024-11-03T10:14:01.477Z] Copying: 14/1024 [MB] (14 MBps) [... intermediate spdk_dd copy-progress updates elided ...] [2024-11-03T10:15:08.184Z] Copying: 1024/1024 [MB] (average 14 MBps)
[2024-11-03 10:15:08.106722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.106825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.822 [2024-11-03 10:15:08.106848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.822 [2024-11-03 10:15:08.106860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.106900] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.822 [2024-11-03 10:15:08.107793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.107829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.822 [2024-11-03 10:15:08.107845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:20:39.822 [2024-11-03 10:15:08.107858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.108208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.108249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.822 [2024-11-03 10:15:08.108263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:39.822 [2024-11-03 10:15:08.108274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.113906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.114138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.822 [2024-11-03 10:15:08.114162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.612 ms 00:20:39.822 [2024-11-03 10:15:08.114176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.121273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.121423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.822 [2024-11-03 10:15:08.121489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.060 ms 00:20:39.822 [2024-11-03 10:15:08.121524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.124586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.124799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.822 [2024-11-03 10:15:08.124992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:20:39.822 [2024-11-03
10:15:08.125033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.130128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.130337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.822 [2024-11-03 10:15:08.130411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms 00:20:39.822 [2024-11-03 10:15:08.130435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.130566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.130806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.822 [2024-11-03 10:15:08.130851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:39.822 [2024-11-03 10:15:08.130872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.133752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.133923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.822 [2024-11-03 10:15:08.133977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:20:39.822 [2024-11-03 10:15:08.133998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.136918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.137085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.822 [2024-11-03 10:15:08.137140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:20:39.822 [2024-11-03 10:15:08.137162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.139466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.139613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.822 [2024-11-03 10:15:08.139668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:20:39.822 [2024-11-03 10:15:08.139688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.141883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.822 [2024-11-03 10:15:08.142041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.822 [2024-11-03 10:15:08.142095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:20:39.822 [2024-11-03 10:15:08.142116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.822 [2024-11-03 10:15:08.142297] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.822 [2024-11-03 10:15:08.142435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.822 [2024-11-03 10:15:08.142522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.822 [2024-11-03 10:15:08.142553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.822 [2024-11-03 10:15:08.142582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.822 [2024-11-03 10:15:08.142611] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
[... ftl_debug.c: 167:ftl_dev_dump_bands entries for Bands 6-100 elided; every band reports 0 / 261120 wr_cnt: 0 state: free ...]
00:20:39.823 [2024-11-03 10:15:08.144746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.823 [2024-11-03 10:15:08.144760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7126a7fa-c0d4-4fad-a178-ffc386b8e8f1 00:20:39.823 [2024-11-03 10:15:08.144768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.823 [2024-11-03 10:15:08.144776] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.823 [2024-11-03
10:15:08.144784] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.823 [2024-11-03 10:15:08.144796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.823 [2024-11-03 10:15:08.144804] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.823 [2024-11-03 10:15:08.144813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.823 [2024-11-03 10:15:08.144821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.823 [2024-11-03 10:15:08.144828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.823 [2024-11-03 10:15:08.144834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.823 [2024-11-03 10:15:08.144843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.823 [2024-11-03 10:15:08.144851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.823 [2024-11-03 10:15:08.144870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:20:39.823 [2024-11-03 10:15:08.144878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.823 [2024-11-03 10:15:08.147466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.823 [2024-11-03 10:15:08.147602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.823 [2024-11-03 10:15:08.147657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:20:39.823 [2024-11-03 10:15:08.147682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.823 [2024-11-03 10:15:08.147829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.823 [2024-11-03 10:15:08.147860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.823 [2024-11-03 10:15:08.147882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:39.824 [2024-11-03 10:15:08.147988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.824 [2024-11-03 10:15:08.154901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.824 [2024-11-03 10:15:08.155063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.824 [2024-11-03 10:15:08.155117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.824 [2024-11-03 10:15:08.155139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.824 [2024-11-03 10:15:08.155210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.824 [2024-11-03 10:15:08.155260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.824 [2024-11-03 10:15:08.155316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.824 [2024-11-03 10:15:08.155337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.824 [2024-11-03 10:15:08.155424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.824 [2024-11-03 10:15:08.155479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.824 [2024-11-03 10:15:08.155500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.824 [2024-11-03 10:15:08.155551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.824 [2024-11-03 10:15:08.155586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:39.824 [2024-11-03 10:15:08.155607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.824 [2024-11-03 10:15:08.155633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.824 [2024-11-03 10:15:08.155653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.824 [2024-11-03 10:15:08.169697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.824 [2024-11-03 10:15:08.169871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.824 [2024-11-03 10:15:08.169889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.824 [2024-11-03 10:15:08.169899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 [2024-11-03 10:15:08.180823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.086 [2024-11-03 10:15:08.180987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.086 [2024-11-03 10:15:08.181012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.086 [2024-11-03 10:15:08.181021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 [2024-11-03 10:15:08.181076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.086 [2024-11-03 10:15:08.181087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.086 [2024-11-03 10:15:08.181095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.086 [2024-11-03 10:15:08.181103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 [2024-11-03 10:15:08.181145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.086 [2024-11-03 10:15:08.181155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.086 [2024-11-03 10:15:08.181164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.086 [2024-11-03 10:15:08.181176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 [2024-11-03 10:15:08.181265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.086 [2024-11-03 10:15:08.181282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.086 [2024-11-03 10:15:08.181291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.086 [2024-11-03 10:15:08.181299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 [2024-11-03 10:15:08.181331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.086 [2024-11-03 10:15:08.181341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:40.086 [2024-11-03 10:15:08.181349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.086 [2024-11-03 10:15:08.181357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 [2024-11-03 10:15:08.181399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.086 [2024-11-03 10:15:08.181408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.086 [2024-11-03 10:15:08.181417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.086 [2024-11-03 10:15:08.181426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 
[2024-11-03 10:15:08.181477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.086 [2024-11-03 10:15:08.181489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.086 [2024-11-03 10:15:08.181497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.086 [2024-11-03 10:15:08.181508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.086 [2024-11-03 10:15:08.181636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.893 ms, result 0 00:20:40.086 00:20:40.086 00:20:40.086 10:15:08 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:42.648 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:42.648 10:15:10 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:42.648 [2024-11-03 10:15:10.723088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:42.648 [2024-11-03 10:15:10.723253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87665 ] 00:20:42.648 [2024-11-03 10:15:10.860702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.648 [2024-11-03 10:15:10.909439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.911 [2024-11-03 10:15:11.019305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.911 [2024-11-03 10:15:11.019375] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.911 [2024-11-03 10:15:11.179672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.179731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.911 [2024-11-03 10:15:11.179749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.911 [2024-11-03 10:15:11.179758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.179821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.179832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.911 [2024-11-03 10:15:11.179848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:42.911 [2024-11-03 10:15:11.179865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.179890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:42.911 [2024-11-03 10:15:11.180180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.911 [2024-11-03 10:15:11.180198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.180207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.911 [2024-11-03 10:15:11.180219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:42.911 [2024-11-03 10:15:11.180257] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.182014] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:42.911 [2024-11-03 10:15:11.185884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.185936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:42.911 [2024-11-03 10:15:11.185948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.871 ms 00:20:42.911 [2024-11-03 10:15:11.185957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.186034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.186047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:42.911 [2024-11-03 10:15:11.186058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:42.911 [2024-11-03 10:15:11.186066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.194289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.194329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.911 [2024-11-03 10:15:11.194353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.175 ms 00:20:42.911 [2024-11-03 10:15:11.194362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.194472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.194483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.911 [2024-11-03 10:15:11.194496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:42.911 [2024-11-03 10:15:11.194507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.194570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.194580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.911 [2024-11-03 10:15:11.194589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:42.911 [2024-11-03 10:15:11.194597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.194619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.911 [2024-11-03 10:15:11.196642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.911 [2024-11-03 10:15:11.196685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.911 [2024-11-03 10:15:11.196696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:20:42.911 [2024-11-03 10:15:11.196710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.911 [2024-11-03 10:15:11.196744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.912 [2024-11-03 10:15:11.196752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.912 [2024-11-03 10:15:11.196761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:42.912 [2024-11-03 10:15:11.196769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.912 [2024-11-03 10:15:11.196789] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:42.912 [2024-11-03 10:15:11.196814] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:42.912 [2024-11-03 10:15:11.196856] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:42.912 [2024-11-03 10:15:11.196872] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:42.912 [2024-11-03 10:15:11.196975] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.912 [2024-11-03 10:15:11.196987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.912 [2024-11-03 10:15:11.197001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:42.912 [2024-11-03 10:15:11.197015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197030] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197039] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:42.912 [2024-11-03 10:15:11.197047] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.912 [2024-11-03 10:15:11.197055] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.912 [2024-11-03 10:15:11.197063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.912 [2024-11-03 10:15:11.197071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.912 [2024-11-03 10:15:11.197078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.912 [2024-11-03 10:15:11.197086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:42.912 [2024-11-03 10:15:11.197094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.912 [2024-11-03 10:15:11.197178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.912 [2024-11-03 10:15:11.197187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.912 [2024-11-03 10:15:11.197197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:42.912 [2024-11-03 10:15:11.197204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.912 [2024-11-03 10:15:11.197324] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.912 [2024-11-03 10:15:11.197336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.912 [2024-11-03 10:15:11.197346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.912 [2024-11-03 10:15:11.197371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:20:42.912 [2024-11-03 10:15:11.197397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.912 [2024-11-03 10:15:11.197413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.912 [2024-11-03 10:15:11.197421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:42.912 [2024-11-03 10:15:11.197432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.912 [2024-11-03 10:15:11.197440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.912 [2024-11-03 10:15:11.197448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:42.912 [2024-11-03 10:15:11.197456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.912 [2024-11-03 10:15:11.197472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.912 [2024-11-03 10:15:11.197499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.912 [2024-11-03 10:15:11.197522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.912 [2024-11-03 10:15:11.197546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.912 [2024-11-03 10:15:11.197573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.912 [2024-11-03 10:15:11.197596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.912 [2024-11-03 10:15:11.197612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:42.912 [2024-11-03 10:15:11.197619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:42.912 [2024-11-03 10:15:11.197626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.912 [2024-11-03 10:15:11.197634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.912 [2024-11-03 10:15:11.197642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:42.912 [2024-11-03 10:15:11.197650] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.912 [2024-11-03 10:15:11.197665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:42.912 [2024-11-03 10:15:11.197672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197679] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.912 [2024-11-03 10:15:11.197692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.912 [2024-11-03 10:15:11.197700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.912 [2024-11-03 10:15:11.197717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.912 [2024-11-03 10:15:11.197723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.912 [2024-11-03 10:15:11.197731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.912 [2024-11-03 10:15:11.197740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.912 [2024-11-03 10:15:11.197747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.912 [2024-11-03 10:15:11.197753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.912 [2024-11-03 10:15:11.197762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.912 [2024-11-03 10:15:11.197776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.912 [2024-11-03 10:15:11.197784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:42.912 [2024-11-03 10:15:11.197792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:42.912 [2024-11-03 10:15:11.197799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:42.912 [2024-11-03 10:15:11.197807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:42.912 [2024-11-03 10:15:11.197814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:42.912 [2024-11-03 10:15:11.197823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:42.912 [2024-11-03 10:15:11.197831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:42.912 [2024-11-03 10:15:11.197838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:42.912 [2024-11-03 10:15:11.197845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:42.912 [2024-11-03 10:15:11.197857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:20:42.912 [2024-11-03 10:15:11.197864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:42.913 [2024-11-03 10:15:11.197872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:42.913 [2024-11-03 10:15:11.197879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:42.913 [2024-11-03 10:15:11.197886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:42.913 [2024-11-03 10:15:11.197893] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.913 [2024-11-03 10:15:11.197901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.913 [2024-11-03 10:15:11.197910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.913 [2024-11-03 10:15:11.197918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.913 [2024-11-03 10:15:11.197925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.913 [2024-11-03 10:15:11.197932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.913 [2024-11-03 10:15:11.197940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.197950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.913 [2024-11-03 10:15:11.197958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:20:42.913 [2024-11-03 10:15:11.197966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.218741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.218940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.913 [2024-11-03 10:15:11.219384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.730 ms 00:20:42.913 [2024-11-03 10:15:11.219437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.219937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.220006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.913 [2024-11-03 10:15:11.220082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:42.913 [2024-11-03 10:15:11.220117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.231065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.231215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.913 [2024-11-03 10:15:11.231294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.848 ms 00:20:42.913 [2024-11-03 10:15:11.231318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.231364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.231386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.913 [2024-11-03 10:15:11.231407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:42.913 [2024-11-03 10:15:11.231426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.231949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.232015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.913 [2024-11-03 10:15:11.232039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:20:42.913 [2024-11-03 10:15:11.232059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.232240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.232266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.913 [2024-11-03 10:15:11.232362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:42.913 [2024-11-03 10:15:11.232387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.238519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.238650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.913 [2024-11-03 10:15:11.238711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:20:42.913 [2024-11-03 10:15:11.238734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.242316] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:42.913 [2024-11-03 10:15:11.242469] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.913 [2024-11-03 10:15:11.242537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.242558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.913 [2024-11-03 10:15:11.242578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:20:42.913 [2024-11-03 10:15:11.242597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.257911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.258062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.913 [2024-11-03 10:15:11.258123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.267 ms 00:20:42.913 [2024-11-03 10:15:11.258146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.261291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.261467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.913 [2024-11-03 10:15:11.261594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:20:42.913 [2024-11-03 10:15:11.261617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.264114] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.264293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.913 [2024-11-03 10:15:11.264353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.447 ms 00:20:42.913 [2024-11-03 10:15:11.264376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.913 [2024-11-03 10:15:11.264712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.913 [2024-11-03 10:15:11.264849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.913 [2024-11-03 10:15:11.265247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:42.913 [2024-11-03 10:15:11.265301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.289511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.289732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:43.258 [2024-11-03 10:15:11.289792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.157 ms 00:20:43.258 [2024-11-03 10:15:11.289815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.297958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:43.258 [2024-11-03 10:15:11.301060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.301189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:43.258 [2024-11-03 10:15:11.301280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.122 ms 00:20:43.258 [2024-11-03 10:15:11.301312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.301405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.301432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:43.258 [2024-11-03 10:15:11.301455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:43.258 [2024-11-03 10:15:11.301474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.301547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.301559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.258 [2024-11-03 10:15:11.301573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:43.258 [2024-11-03 10:15:11.301583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.301617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.301627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:43.258 [2024-11-03 10:15:11.301636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:43.258 [2024-11-03 10:15:11.301644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.301677] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:43.258 [2024-11-03 10:15:11.301689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.301699] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:43.258 [2024-11-03 10:15:11.301707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:43.258 [2024-11-03 10:15:11.301715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.306826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.306874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:43.258 [2024-11-03 10:15:11.306886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.084 ms 00:20:43.258 [2024-11-03 10:15:11.306894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.306978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.258 [2024-11-03 10:15:11.306988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:43.258 [2024-11-03 10:15:11.306998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:43.258 [2024-11-03 10:15:11.307011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.258 [2024-11-03 10:15:11.308098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.982 ms, result 0 00:20:44.209  [2024-11-03T10:15:13.513Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-03T10:15:14.458Z] Copying: 24/1024 [MB] (10 MBps) [2024-11-03T10:15:15.601Z] Copying: 35/1024 [MB] (10 MBps) [2024-11-03T10:15:16.546Z] Copying: 45/1024 [MB] (10 MBps) [2024-11-03T10:15:17.490Z] Copying: 56/1024 [MB] (10 MBps) [2024-11-03T10:15:18.433Z] Copying: 77/1024 [MB] (21 MBps) [2024-11-03T10:15:19.377Z] Copying: 88/1024 [MB] (10 MBps) [2024-11-03T10:15:20.321Z] Copying: 111/1024 [MB] (23 MBps) [2024-11-03T10:15:21.708Z] Copying: 154/1024 [MB] (42 MBps) [2024-11-03T10:15:22.651Z] Copying: 173/1024 [MB] (19 MBps) [2024-11-03T10:15:23.596Z] Copying: 192/1024 [MB] (18 MBps) [2024-11-03T10:15:24.539Z] Copying: 204/1024 [MB] (12 MBps) [2024-11-03T10:15:25.484Z] Copying: 217/1024 [MB] (12 MBps) [2024-11-03T10:15:26.427Z] Copying: 233/1024 [MB] (16 MBps) [2024-11-03T10:15:27.371Z] Copying: 249/1024 [MB] (15 MBps) [2024-11-03T10:15:28.757Z] Copying: 292/1024 [MB] (43 MBps) [2024-11-03T10:15:29.377Z] Copying: 337/1024 [MB] (44 MBps) [2024-11-03T10:15:30.334Z] Copying: 360/1024 [MB] (23 MBps) [2024-11-03T10:15:31.722Z] Copying: 381/1024 [MB] (21 MBps) [2024-11-03T10:15:32.667Z] Copying: 402/1024 [MB] (21 MBps) [2024-11-03T10:15:33.611Z] Copying: 422/1024 [MB] (19 MBps) [2024-11-03T10:15:34.556Z] Copying: 434/1024 [MB] (11 MBps) [2024-11-03T10:15:35.498Z] Copying: 466/1024 [MB] (31 MBps) [2024-11-03T10:15:36.443Z] Copying: 510/1024 [MB] (44 MBps) [2024-11-03T10:15:37.387Z] Copying: 538/1024 [MB] (27 MBps) [2024-11-03T10:15:38.333Z] Copying: 552/1024 [MB] (14 MBps) [2024-11-03T10:15:39.721Z] Copying: 570/1024 [MB] (18 MBps) [2024-11-03T10:15:40.664Z] Copying: 589/1024 [MB] (19 MBps) [2024-11-03T10:15:41.607Z] Copying: 604/1024 [MB] (15 MBps) [2024-11-03T10:15:42.549Z] Copying: 621/1024 [MB] (16 MBps) [2024-11-03T10:15:43.493Z] Copying: 634/1024 [MB] (13 MBps) [2024-11-03T10:15:44.434Z] Copying: 650/1024 [MB] (15 MBps) [2024-11-03T10:15:45.377Z] Copying: 667/1024 [MB] (16 MBps) [2024-11-03T10:15:46.324Z] Copying: 712/1024 [MB] (44 MBps) [2024-11-03T10:15:47.349Z] Copying: 729/1024 [MB] (17 MBps) [2024-11-03T10:15:48.738Z] Copying: 755/1024 [MB] (25 MBps) 
[2024-11-03T10:15:49.681Z] Copying: 773/1024 [MB] (18 MBps) [2024-11-03T10:15:50.627Z] Copying: 809/1024 [MB] (36 MBps) [2024-11-03T10:15:51.568Z] Copying: 832/1024 [MB] (22 MBps) [2024-11-03T10:15:52.512Z] Copying: 860/1024 [MB] (27 MBps) [2024-11-03T10:15:53.455Z] Copying: 881/1024 [MB] (21 MBps) [2024-11-03T10:15:54.400Z] Copying: 895/1024 [MB] (14 MBps) [2024-11-03T10:15:55.342Z] Copying: 911/1024 [MB] (15 MBps) [2024-11-03T10:15:56.731Z] Copying: 937/1024 [MB] (26 MBps) [2024-11-03T10:15:57.675Z] Copying: 954/1024 [MB] (17 MBps) [2024-11-03T10:15:58.621Z] Copying: 974/1024 [MB] (19 MBps) [2024-11-03T10:15:59.564Z] Copying: 990/1024 [MB] (15 MBps) [2024-11-03T10:16:00.511Z] Copying: 1000/1024 [MB] (10 MBps) [2024-11-03T10:16:01.456Z] Copying: 1015/1024 [MB] (14 MBps) [2024-11-03T10:16:02.028Z] Copying: 1047920/1048576 [kB] (8192 kBps) [2024-11-03T10:16:02.028Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-03 10:16:01.934808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.666 [2024-11-03 10:16:01.935266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:33.666 [2024-11-03 10:16:01.935506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:33.667 [2024-11-03 10:16:01.935552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.667 [2024-11-03 10:16:01.938969] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:33.667 [2024-11-03 10:16:01.940843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.667 [2024-11-03 10:16:01.941006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:33.667 [2024-11-03 10:16:01.941073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:21:33.667 [2024-11-03 10:16:01.941099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.667 [2024-11-03 10:16:01.952583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.667 [2024-11-03 10:16:01.952754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:33.667 [2024-11-03 10:16:01.952820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.708 ms 00:21:33.667 [2024-11-03 10:16:01.952845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.667 [2024-11-03 10:16:01.977162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.667 [2024-11-03 10:16:01.977341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:33.667 [2024-11-03 10:16:01.977413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.284 ms 00:21:33.667 [2024-11-03 10:16:01.977448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.667 [2024-11-03 10:16:01.983663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.667 [2024-11-03 10:16:01.983820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:33.667 [2024-11-03 10:16:01.983884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:21:33.667 [2024-11-03 10:16:01.983908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.667 [2024-11-03 10:16:01.986492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.667 [2024-11-03 10:16:01.986635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:33.667 [2024-11-03 
10:16:01.986689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:21:33.667 [2024-11-03 10:16:01.986710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.667 [2024-11-03 10:16:01.991880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.667 [2024-11-03 10:16:01.992029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:33.667 [2024-11-03 10:16:01.992082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.123 ms 00:21:33.667 [2024-11-03 10:16:01.992105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.930 [2024-11-03 10:16:02.261716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.930 [2024-11-03 10:16:02.261902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:33.930 [2024-11-03 10:16:02.261959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 269.320 ms 00:21:33.930 [2024-11-03 10:16:02.261982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.930 [2024-11-03 10:16:02.264627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.931 [2024-11-03 10:16:02.264787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:33.931 [2024-11-03 10:16:02.264848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:21:33.931 [2024-11-03 10:16:02.264872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.931 [2024-11-03 10:16:02.266703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.931 [2024-11-03 10:16:02.266853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:33.931 [2024-11-03 10:16:02.266913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.785 ms 00:21:33.931 [2024-11-03 10:16:02.266935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.931 [2024-11-03 10:16:02.268587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.931 [2024-11-03 10:16:02.268749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:33.931 [2024-11-03 10:16:02.268810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:21:33.931 [2024-11-03 10:16:02.268833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.931 [2024-11-03 10:16:02.270475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.931 [2024-11-03 10:16:02.270632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:33.931 [2024-11-03 10:16:02.270692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:21:33.931 [2024-11-03 10:16:02.270714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.931 [2024-11-03 10:16:02.270756] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:33.931 [2024-11-03 10:16:02.270783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107008 / 261120 wr_cnt: 1 state: open 00:21:33.931 [2024-11-03 10:16:02.270815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.270845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.270873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.270949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.270980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271948] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.271995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 
10:16:02.272173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:33.931 [2024-11-03 10:16:02.272303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:21:33.932 [2024-11-03 10:16:02.272406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:33.932 [2024-11-03 10:16:02.272595] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:33.932 [2024-11-03 10:16:02.272604] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7126a7fa-c0d4-4fad-a178-ffc386b8e8f1 00:21:33.932 [2024-11-03 10:16:02.272613] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107008 00:21:33.932 [2024-11-03 
[2024-11-03 10:16:02.272620] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107968
[2024-11-03 10:16:02.272627] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107008
[2024-11-03 10:16:02.272648] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090
[2024-11-03 10:16:02.272656] ftl_debug.c: 218-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-03 10:16:02.272700] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action "Dump statistics": duration 1.945 ms, status 0
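The WAF value in the stats dump above can be sanity-checked from the two write counters; a minimal sketch, assuming WAF here is simply total writes divided by user writes (the 960 extra blocks presumably being FTL metadata/padding writes):

    # Sanity check of the WAF printed by ftl_dev_dump_stats above, assuming
    # WAF = total writes / user writes.
    total_writes, user_writes = 107968, 107008
    print(f"{total_writes / user_writes:.4f}")  # -> 1.0090, as logged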
[2024-11-03 10:16:02.275045] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action "Deinitialize L2P": duration 2.286 ms, status 0
[2024-11-03 10:16:02.275361] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action "Deinitialize P2L checkpointing": duration 0.102 ms, status 0
[2024-11-03 10:16:02.281960-02.306020] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each duration 0.000 ms, status 0: Initialize reloc; Initialize bands metadata; Initialize trim map; Initialize valid map; Initialize NV cache; Initialize metadata; Initialize core IO channel; Initialize bands; Initialize memory pools; Initialize superblock; Open cache bdev
[2024-11-03 10:16:02.306133] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback "Open base bdev": duration 0.000 ms, status 0
[2024-11-03 10:16:02.306358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.707 ms, result 0
10:16:03 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
[2024-11-03 10:16:03.304713] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
[2024-11-03 10:16:03.304856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88205 ]
[2024-11-03 10:16:03.443691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-03 10:16:03.493553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-03 10:16:03.607361] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-03 10:16:03.607448] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
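The spdk_dd invocation above reads the previously written region back out of the ftl0 bdev into a test file. A quick unit check, under the assumption that --skip and --count are counted in 4 KiB FTL blocks rather than bytes (an assumption, but the only one consistent with the 1024 MB copy total reported below):

    # Assumed-unit check for the spdk_dd restore command above; the 4 KiB
    # block size is inferred, not confirmed by the log.
    block_size = 4096
    print(262144 * block_size // 2**20)  # --count -> 1024 MiB read back
    print(131072 * block_size // 2**20)  # --skip  -> 512 MiB offset into ftl0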
[2024-11-03 10:16:03.768061] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-03 10:16:03.768368] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-03 10:16:03.770121] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-03 10:16:03.782461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-03 10:16:03.767854-03.784797] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Check configuration (0.004 ms); Open base bdev (0.040 ms); Open cache bdev (0.332 ms); Load super block (3.652 ms); Validate super block (0.029 ms); Initialize memory pools (7.859 ms); Initialize bands (0.099 ms); Register IO device (0.012 ms); Initialize core IO channel (2.090 ms); Decorate bands (0.014 ms)
[2024-11-03 10:16:03.784819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-03 10:16:03.784843-03.785037] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] nvc layout blob load/store 0x150 bytes; base layout blob load/store 0x48 bytes; layout blob load/store 0x190 bytes
[2024-11-03 10:16:03.785048] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-03 10:16:03.785060] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-03 10:16:03.785068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-03 10:16:03.785081] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-03 10:16:03.785089] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-03 10:16:03.785097] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-03 10:16:03.785105-03.785271] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Initialize layout (0.288 ms); Verify layout (0.069 ms)
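The L2P figures above imply the size of the mapping table; a small cross-check, assuming the table is simply entries times address size, which should match the l2p region in the NV cache layout dumped below:

    # Cross-check of the L2P numbers reported at layout setup.
    l2p_entries, addr_size = 20971520, 4   # entries and bytes per entry, from the log
    print(l2p_entries * addr_size / 2**20) # -> 80.0 MiB, matching the l2p region below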
[2024-11-03 10:16:03.785376] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region           offset (MiB)  blocks (MiB)
    sb                     0.00          0.12
    l2p                    0.12         80.00
    band_md               80.12          0.50
    band_md_mirror        80.62          0.50
    nvc_md               113.88          0.12
    nvc_md_mirror        114.00          0.12
    p2l0                  81.12          8.00
    p2l1                  89.12          8.00
    p2l2                  97.12          8.00
    p2l3                 105.12          8.00
    trim_md              113.12          0.25
    trim_md_mirror       113.38          0.25
    trim_log             113.62          0.12
    trim_log_mirror      113.75          0.12
[2024-11-03 10:16:03.785871] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region           offset (MiB)  blocks (MiB)
    sb_mirror              0.00          0.12
    vmap              102400.25          3.38
    data_btm               0.25     102400.00
[2024-11-03 10:16:03.785948] upgrade/ftl_sb_v5.c: 408-416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
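The hex SB metadata entries above encode the same regions as the MiB-based dump; decoding one of them as an example, assuming blk_offs/blk_sz count 4 KiB blocks, with type 0x2 presumably being the l2p region given that its size works out to the same 80 MiB:

    # Decode one "SB metadata layout - nvc" entry from the listing above.
    blk = 4096                   # assumed FTL block size in bytes
    print(0x20 * blk / 2**20)    # blk_offs -> 0.125 MiB ("offset: 0.12 MiB")
    print(0x5000 * blk / 2**20)  # blk_sz   -> 80.0 MiB  ("blocks: 80.00 MiB")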
[2024-11-03 10:16:03.786086] upgrade/ftl_sb_v5.c: 422-430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-03 10:16:03.786135-03.826187] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Layout upgrade (0.832 ms); Initialize metadata (26.177 ms); Initialize band addresses (0.166 ms); Initialize NV cache (12.942 ms)
[2024-11-03 10:16:03.826251-03.833935] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Initialize valid map (0.028 ms); Initialize trim map (0.511 ms); Initialize bands metadata (0.126 ms); Initialize reloc (6.731 ms)
[2024-11-03 10:16:03.837835] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-11-03 10:16:03.837884] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-03 10:16:03.837896-03.884713] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Restore NV cache metadata (3.864 ms); Restore valid map metadata (15.883 ms); Restore band info metadata (2.823 ms); Restore trim metadata (2.645 ms); Initialize P2L checkpointing (0.259 ms); Restore P2L checkpoints (24.240 ms)
[2024-11-03 10:16:03.893385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-03 10:16:03.896523-03.898782] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Initialize L2P (11.758 ms); Restore L2P (0.016 ms); Finalize band initialization (1.773 ms); Start core poller (0.010 ms)
[2024-11-03 10:16:03.898820] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-03 10:16:03.898833-03.904741] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Self test on startup (0.012 ms); Set FTL dirty state (5.666 ms); Finalize initialization (0.048 ms)
[2024-11-03 10:16:03.906074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.756 ms, result 0
[2024-11-03T10:16:06.398Z - 2024-11-03T10:17:16.087Z] Copying: 12/1024 ... 1024/1024 [MB], 10-22 MBps per sample, average 14 MBps
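The reported average rate is consistent with the progress timestamps; a rough check using only the first and last samples above:

    # Consistency check of the dd copy rate over the sampled window.
    first_s, last_s = 16*60 + 6.398, 17*60 + 16.087  # 10:16:06.398 and 10:17:16.087
    mb_in_window = 1024 - 12                         # 12 MB were already done at the first sample
    print(mb_in_window / (last_s - first_s))         # ~14.5 MB/s, consistent with "average 14 MBps"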
[2024-11-03 10:17:15.962658] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action "Deinit core IO channel": duration 0.004 ms, status 0
[2024-11-03 10:17:15.962799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-03 10:17:15.963628-16.374565] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Unregister IO device (0.811 ms); Stop core poller (0.238 ms); Persist L2P (6.325 ms); Finish L2P trims (6.114 ms); Persist NV cache metadata (3.073 ms); Persist valid map metadata (5.087 ms); Persist P2L metadata (382.300 ms); Persist band info metadata (2.960 ms); Persist trim metadata (1.937 ms); Persist superblock (1.782 ms)
[2024-11-03 10:17:16.376385] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action "Set FTL clean state": duration 1.750 ms, status 0
[2024-11-03 10:17:16.376634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-03 10:17:16.376650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
[2024-11-03 10:17:16.376662-16.377490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2-100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-03 10:17:16.377507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-03 10:17:16.377515] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7126a7fa-c0d4-4fad-a178-ffc386b8e8f1
[2024-11-03 10:17:16.377524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
[2024-11-03 10:17:16.377532] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25024
[2024-11-03 10:17:16.377540] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24064
[2024-11-03 10:17:16.377555] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0399
[2024-11-03 10:17:16.377563] ftl_debug.c: 218-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-03 10:17:16.377599-16.380306] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Actions, status 0: Dump statistics (0.966 ms); Deinitialize L2P (2.273 ms); Deinitialize P2L checkpointing (0.132 ms)
[2024-11-03 10:17:16.386941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-03 10:17:16.387105] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:48.300 [2024-11-03 10:17:16.387124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.300 [2024-11-03 10:17:16.387133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.300 [2024-11-03 10:17:16.387207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.300 [2024-11-03 10:17:16.387216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:48.300 [2024-11-03 10:17:16.387262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.387271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.387319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.387334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:48.301 [2024-11-03 10:17:16.387347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.387355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.387371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.387380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:48.301 [2024-11-03 10:17:16.387389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.387397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.400825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.400877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:48.301 [2024-11-03 10:17:16.400889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.400898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.411848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.411912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:48.301 [2024-11-03 10:17:16.411924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.411934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.411985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.411995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:48.301 [2024-11-03 10:17:16.412008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.412017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.412054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.412067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:48.301 [2024-11-03 10:17:16.412076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.412084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.412157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.412192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:48.301 [2024-11-03 10:17:16.412201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.412213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.412268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.412279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:48.301 [2024-11-03 10:17:16.412288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.412296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.412338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.412347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.301 [2024-11-03 10:17:16.412356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.412367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.412414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.301 [2024-11-03 10:17:16.412424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.301 [2024-11-03 10:17:16.412433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.301 [2024-11-03 10:17:16.412441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.301 [2024-11-03 10:17:16.412571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.883 ms, result 0 00:22:48.301 00:22:48.301 00:22:48.301 10:17:16 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:50.849 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:50.849 10:17:18 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:50.849 10:17:18 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:50.849 10:17:18 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86063 00:22:50.849 10:17:19 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86063 ']' 00:22:50.849 Process with pid 86063 is not found 00:22:50.849 10:17:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86063 00:22:50.849 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86063) - No such process 00:22:50.849 10:17:19 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86063 is not found' 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:50.849 Remove shared memory files 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm 
-f 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:50.849 10:17:19 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:50.849 ************************************ 00:22:50.849 END TEST ftl_restore 00:22:50.849 ************************************ 00:22:50.849 00:22:50.849 real 4m41.709s 00:22:50.849 user 4m28.618s 00:22:50.849 sys 0m12.854s 00:22:50.849 10:17:19 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.849 10:17:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:50.849 10:17:19 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:50.849 10:17:19 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:50.849 10:17:19 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.849 10:17:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:50.849 ************************************ 00:22:50.849 START TEST ftl_dirty_shutdown 00:22:50.849 ************************************ 00:22:50.849 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:50.849 * Looking for test storage... 00:22:50.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:50.849 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:50.849 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:50.849 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.111 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:51.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.112 --rc genhtml_branch_coverage=1 00:22:51.112 --rc genhtml_function_coverage=1 00:22:51.112 --rc genhtml_legend=1 00:22:51.112 --rc geninfo_all_blocks=1 00:22:51.112 --rc geninfo_unexecuted_blocks=1 00:22:51.112 00:22:51.112 ' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:51.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.112 --rc genhtml_branch_coverage=1 00:22:51.112 --rc genhtml_function_coverage=1 00:22:51.112 --rc genhtml_legend=1 00:22:51.112 --rc geninfo_all_blocks=1 00:22:51.112 --rc geninfo_unexecuted_blocks=1 00:22:51.112 00:22:51.112 ' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:51.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.112 --rc genhtml_branch_coverage=1 00:22:51.112 --rc genhtml_function_coverage=1 00:22:51.112 --rc genhtml_legend=1 00:22:51.112 --rc geninfo_all_blocks=1 00:22:51.112 --rc geninfo_unexecuted_blocks=1 00:22:51.112 00:22:51.112 ' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:51.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.112 --rc genhtml_branch_coverage=1 00:22:51.112 --rc genhtml_function_coverage=1 00:22:51.112 --rc genhtml_legend=1 00:22:51.112 --rc geninfo_all_blocks=1 00:22:51.112 --rc geninfo_unexecuted_blocks=1 00:22:51.112 00:22:51.112 ' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:51.112 10:17:19 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89051 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89051 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89051 ']' 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.112 10:17:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:51.112 [2024-11-03 10:17:19.359029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
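Aside: the WAF value in the ftl_dev_dump_stats block above is simply the ratio of the two write counters it prints, total writes over user writes. A one-line cross-check of the 1.0399 figure, assuming any POSIX awk:

awk 'BEGIN { printf "%.4f\n", 25024 / 24064 }'   # total writes / user writes -> 1.0399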
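Aside: the scripts/common.sh trace above ('lt 1.15 2' expanding into cmp_versions "1.15" '<' "2") compares version strings by splitting them on '.', '-' and ':' and walking the numeric fields until one differs. A minimal self-contained sketch of that logic, with a hypothetical helper name (ver_lt) and no handling of leading zeros or non-numeric suffixes:

ver_lt() {
    local IFS=.-:                       # split fields the way the trace above does
    local -a v1=($1) v2=($2)
    local i
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # first smaller field: less-than
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1    # first larger field: not less-than
    done
    return 1                            # all fields equal: not strictly less-than
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch the trace above takes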
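Aside: the get_bdev_size helper traced below reads block_size and num_blocks out of the bdev_get_bdevs JSON with jq and reports their product in MiB. Recomputing the two sizes that show up in the traces that follow, with plain shell arithmetic and nothing SPDK-specific:

echo $(( 4096 * 1310720 / 1024 / 1024 ))    # nvme0n1 -> 5120 (MiB)
echo $(( 4096 * 26476544 / 1024 / 1024 ))   # lvs/nvme0n1p0 lvol -> 103424 (MiB)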
00:22:51.112 [2024-11-03 10:17:19.359452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89051 ] 00:22:51.373 [2024-11-03 10:17:19.497376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.373 [2024-11-03 10:17:19.547839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:51.943 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:52.205 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:52.467 { 00:22:52.467 "name": "nvme0n1", 00:22:52.467 "aliases": [ 00:22:52.467 "7b82f8e9-afc1-427d-b7ae-044d6d874831" 00:22:52.467 ], 00:22:52.467 "product_name": "NVMe disk", 00:22:52.467 "block_size": 4096, 00:22:52.467 "num_blocks": 1310720, 00:22:52.467 "uuid": "7b82f8e9-afc1-427d-b7ae-044d6d874831", 00:22:52.467 "numa_id": -1, 00:22:52.467 "assigned_rate_limits": { 00:22:52.467 "rw_ios_per_sec": 0, 00:22:52.467 "rw_mbytes_per_sec": 0, 00:22:52.467 "r_mbytes_per_sec": 0, 00:22:52.467 "w_mbytes_per_sec": 0 00:22:52.467 }, 00:22:52.467 "claimed": true, 00:22:52.467 "claim_type": "read_many_write_one", 00:22:52.467 "zoned": false, 00:22:52.467 "supported_io_types": { 00:22:52.467 "read": true, 00:22:52.467 "write": true, 00:22:52.467 "unmap": true, 00:22:52.467 "flush": true, 00:22:52.467 "reset": true, 00:22:52.467 "nvme_admin": true, 00:22:52.467 "nvme_io": true, 00:22:52.467 "nvme_io_md": false, 00:22:52.467 "write_zeroes": true, 00:22:52.467 "zcopy": false, 00:22:52.467 "get_zone_info": false, 00:22:52.467 "zone_management": false, 00:22:52.467 "zone_append": false, 00:22:52.467 "compare": true, 00:22:52.467 "compare_and_write": false, 00:22:52.467 "abort": true, 00:22:52.467 "seek_hole": false, 00:22:52.467 "seek_data": false, 00:22:52.467 
"copy": true, 00:22:52.467 "nvme_iov_md": false 00:22:52.467 }, 00:22:52.467 "driver_specific": { 00:22:52.467 "nvme": [ 00:22:52.467 { 00:22:52.467 "pci_address": "0000:00:11.0", 00:22:52.467 "trid": { 00:22:52.467 "trtype": "PCIe", 00:22:52.467 "traddr": "0000:00:11.0" 00:22:52.467 }, 00:22:52.467 "ctrlr_data": { 00:22:52.467 "cntlid": 0, 00:22:52.467 "vendor_id": "0x1b36", 00:22:52.467 "model_number": "QEMU NVMe Ctrl", 00:22:52.467 "serial_number": "12341", 00:22:52.467 "firmware_revision": "8.0.0", 00:22:52.467 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:52.467 "oacs": { 00:22:52.467 "security": 0, 00:22:52.467 "format": 1, 00:22:52.467 "firmware": 0, 00:22:52.467 "ns_manage": 1 00:22:52.467 }, 00:22:52.467 "multi_ctrlr": false, 00:22:52.467 "ana_reporting": false 00:22:52.467 }, 00:22:52.467 "vs": { 00:22:52.467 "nvme_version": "1.4" 00:22:52.467 }, 00:22:52.467 "ns_data": { 00:22:52.467 "id": 1, 00:22:52.467 "can_share": false 00:22:52.467 } 00:22:52.467 } 00:22:52.467 ], 00:22:52.467 "mp_policy": "active_passive" 00:22:52.467 } 00:22:52.467 } 00:22:52.467 ]' 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:52.467 10:17:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:52.728 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=4b7f602c-0722-42d6-8142-c838ed8c1562 00:22:52.728 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:52.729 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b7f602c-0722-42d6-8142-c838ed8c1562 00:22:52.990 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:53.251 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=27eeed2e-eb8e-49be-ab50-29664a71ee68 00:22:53.251 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 27eeed2e-eb8e-49be-ab50-29664a71ee68 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0f746074-8647-4c6f-8db4-813f625f1458 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0f746074-8647-4c6f-8db4-813f625f1458 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=0f746074-8647-4c6f-8db4-813f625f1458 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0f746074-8647-4c6f-8db4-813f625f1458 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0f746074-8647-4c6f-8db4-813f625f1458 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:53.513 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f746074-8647-4c6f-8db4-813f625f1458 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:53.775 { 00:22:53.775 "name": "0f746074-8647-4c6f-8db4-813f625f1458", 00:22:53.775 "aliases": [ 00:22:53.775 "lvs/nvme0n1p0" 00:22:53.775 ], 00:22:53.775 "product_name": "Logical Volume", 00:22:53.775 "block_size": 4096, 00:22:53.775 "num_blocks": 26476544, 00:22:53.775 "uuid": "0f746074-8647-4c6f-8db4-813f625f1458", 00:22:53.775 "assigned_rate_limits": { 00:22:53.775 "rw_ios_per_sec": 0, 00:22:53.775 "rw_mbytes_per_sec": 0, 00:22:53.775 "r_mbytes_per_sec": 0, 00:22:53.775 "w_mbytes_per_sec": 0 00:22:53.775 }, 00:22:53.775 "claimed": false, 00:22:53.775 "zoned": false, 00:22:53.775 "supported_io_types": { 00:22:53.775 "read": true, 00:22:53.775 "write": true, 00:22:53.775 "unmap": true, 00:22:53.775 "flush": false, 00:22:53.775 "reset": true, 00:22:53.775 "nvme_admin": false, 00:22:53.775 "nvme_io": false, 00:22:53.775 "nvme_io_md": false, 00:22:53.775 "write_zeroes": true, 00:22:53.775 "zcopy": false, 00:22:53.775 "get_zone_info": false, 00:22:53.775 "zone_management": false, 00:22:53.775 "zone_append": false, 00:22:53.775 "compare": false, 00:22:53.775 "compare_and_write": false, 00:22:53.775 "abort": false, 00:22:53.775 "seek_hole": true, 00:22:53.775 "seek_data": true, 00:22:53.775 "copy": false, 00:22:53.775 "nvme_iov_md": false 00:22:53.775 }, 00:22:53.775 "driver_specific": { 00:22:53.775 "lvol": { 00:22:53.775 "lvol_store_uuid": "27eeed2e-eb8e-49be-ab50-29664a71ee68", 00:22:53.775 "base_bdev": "nvme0n1", 00:22:53.775 "thin_provision": true, 00:22:53.775 "num_allocated_clusters": 0, 00:22:53.775 "snapshot": false, 00:22:53.775 "clone": false, 00:22:53.775 "esnap_clone": false 00:22:53.775 } 00:22:53.775 } 00:22:53.775 } 00:22:53.775 ]' 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:53.775 10:17:21 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 0f746074-8647-4c6f-8db4-813f625f1458 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0f746074-8647-4c6f-8db4-813f625f1458 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:54.037 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f746074-8647-4c6f-8db4-813f625f1458 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:54.298 { 00:22:54.298 "name": "0f746074-8647-4c6f-8db4-813f625f1458", 00:22:54.298 "aliases": [ 00:22:54.298 "lvs/nvme0n1p0" 00:22:54.298 ], 00:22:54.298 "product_name": "Logical Volume", 00:22:54.298 "block_size": 4096, 00:22:54.298 "num_blocks": 26476544, 00:22:54.298 "uuid": "0f746074-8647-4c6f-8db4-813f625f1458", 00:22:54.298 "assigned_rate_limits": { 00:22:54.298 "rw_ios_per_sec": 0, 00:22:54.298 "rw_mbytes_per_sec": 0, 00:22:54.298 "r_mbytes_per_sec": 0, 00:22:54.298 "w_mbytes_per_sec": 0 00:22:54.298 }, 00:22:54.298 "claimed": false, 00:22:54.298 "zoned": false, 00:22:54.298 "supported_io_types": { 00:22:54.298 "read": true, 00:22:54.298 "write": true, 00:22:54.298 "unmap": true, 00:22:54.298 "flush": false, 00:22:54.298 "reset": true, 00:22:54.298 "nvme_admin": false, 00:22:54.298 "nvme_io": false, 00:22:54.298 "nvme_io_md": false, 00:22:54.298 "write_zeroes": true, 00:22:54.298 "zcopy": false, 00:22:54.298 "get_zone_info": false, 00:22:54.298 "zone_management": false, 00:22:54.298 "zone_append": false, 00:22:54.298 "compare": false, 00:22:54.298 "compare_and_write": false, 00:22:54.298 "abort": false, 00:22:54.298 "seek_hole": true, 00:22:54.298 "seek_data": true, 00:22:54.298 "copy": false, 00:22:54.298 "nvme_iov_md": false 00:22:54.298 }, 00:22:54.298 "driver_specific": { 00:22:54.298 "lvol": { 00:22:54.298 "lvol_store_uuid": "27eeed2e-eb8e-49be-ab50-29664a71ee68", 00:22:54.298 "base_bdev": "nvme0n1", 00:22:54.298 "thin_provision": true, 00:22:54.298 "num_allocated_clusters": 0, 00:22:54.298 "snapshot": false, 00:22:54.298 "clone": false, 00:22:54.298 "esnap_clone": false 00:22:54.298 } 00:22:54.298 } 00:22:54.298 } 00:22:54.298 ]' 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:54.298 10:17:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:54.559 10:17:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:54.559 10:17:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0f746074-8647-4c6f-8db4-813f625f1458 00:22:54.559 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0f746074-8647-4c6f-8db4-813f625f1458 00:22:54.559 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:54.559 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:54.559 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:54.559 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f746074-8647-4c6f-8db4-813f625f1458 00:22:54.819 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:54.819 { 00:22:54.819 "name": "0f746074-8647-4c6f-8db4-813f625f1458", 00:22:54.819 "aliases": [ 00:22:54.819 "lvs/nvme0n1p0" 00:22:54.819 ], 00:22:54.819 "product_name": "Logical Volume", 00:22:54.819 "block_size": 4096, 00:22:54.819 "num_blocks": 26476544, 00:22:54.819 "uuid": "0f746074-8647-4c6f-8db4-813f625f1458", 00:22:54.819 "assigned_rate_limits": { 00:22:54.819 "rw_ios_per_sec": 0, 00:22:54.819 "rw_mbytes_per_sec": 0, 00:22:54.819 "r_mbytes_per_sec": 0, 00:22:54.819 "w_mbytes_per_sec": 0 00:22:54.819 }, 00:22:54.819 "claimed": false, 00:22:54.819 "zoned": false, 00:22:54.819 "supported_io_types": { 00:22:54.819 "read": true, 00:22:54.819 "write": true, 00:22:54.819 "unmap": true, 00:22:54.819 "flush": false, 00:22:54.819 "reset": true, 00:22:54.819 "nvme_admin": false, 00:22:54.819 "nvme_io": false, 00:22:54.819 "nvme_io_md": false, 00:22:54.819 "write_zeroes": true, 00:22:54.819 "zcopy": false, 00:22:54.819 "get_zone_info": false, 00:22:54.819 "zone_management": false, 00:22:54.819 "zone_append": false, 00:22:54.819 "compare": false, 00:22:54.819 "compare_and_write": false, 00:22:54.819 "abort": false, 00:22:54.819 "seek_hole": true, 00:22:54.819 "seek_data": true, 00:22:54.819 "copy": false, 00:22:54.819 "nvme_iov_md": false 00:22:54.819 }, 00:22:54.819 "driver_specific": { 00:22:54.819 "lvol": { 00:22:54.820 "lvol_store_uuid": "27eeed2e-eb8e-49be-ab50-29664a71ee68", 00:22:54.820 "base_bdev": "nvme0n1", 00:22:54.820 "thin_provision": true, 00:22:54.820 "num_allocated_clusters": 0, 00:22:54.820 "snapshot": false, 00:22:54.820 "clone": false, 00:22:54.820 "esnap_clone": false 00:22:54.820 } 00:22:54.820 } 00:22:54.820 } 00:22:54.820 ]' 00:22:54.820 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:54.820 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:54.820 10:17:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0f746074-8647-4c6f-8db4-813f625f1458 
--l2p_dram_limit 10' 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:54.820 10:17:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0f746074-8647-4c6f-8db4-813f625f1458 --l2p_dram_limit 10 -c nvc0n1p0 00:22:55.082 [2024-11-03 10:17:23.185999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.186043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:55.082 [2024-11-03 10:17:23.186055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:55.082 [2024-11-03 10:17:23.186063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.186105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.186114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.082 [2024-11-03 10:17:23.186121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:55.082 [2024-11-03 10:17:23.186129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.186145] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:55.082 [2024-11-03 10:17:23.186360] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:55.082 [2024-11-03 10:17:23.186371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.186378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.082 [2024-11-03 10:17:23.186411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:22:55.082 [2024-11-03 10:17:23.186419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.186468] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5e441cb6-b29b-4f7b-bae7-171e68ff8085 00:22:55.082 [2024-11-03 10:17:23.187403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.187423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:55.082 [2024-11-03 10:17:23.187432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:55.082 [2024-11-03 10:17:23.187438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.192128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.192156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.082 [2024-11-03 10:17:23.192172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.651 ms 00:22:55.082 [2024-11-03 10:17:23.192178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.192246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.192255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.082 [2024-11-03 10:17:23.192263] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:55.082 [2024-11-03 10:17:23.192270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.192302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.192309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:55.082 [2024-11-03 10:17:23.192317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:55.082 [2024-11-03 10:17:23.192323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.192348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:55.082 [2024-11-03 10:17:23.193610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.193712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.082 [2024-11-03 10:17:23.193728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:22:55.082 [2024-11-03 10:17:23.193736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.193764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.193772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:55.082 [2024-11-03 10:17:23.193778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:55.082 [2024-11-03 10:17:23.193789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.193802] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:55.082 [2024-11-03 10:17:23.193912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:55.082 [2024-11-03 10:17:23.193922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:55.082 [2024-11-03 10:17:23.193935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:55.082 [2024-11-03 10:17:23.193943] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:55.082 [2024-11-03 10:17:23.193952] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:55.082 [2024-11-03 10:17:23.193958] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:55.082 [2024-11-03 10:17:23.193968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:55.082 [2024-11-03 10:17:23.193973] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:55.082 [2024-11-03 10:17:23.193980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:55.082 [2024-11-03 10:17:23.193987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.193994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:55.082 [2024-11-03 10:17:23.194000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:22:55.082 [2024-11-03 10:17:23.194006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.194070] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.082 [2024-11-03 10:17:23.194078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:55.082 [2024-11-03 10:17:23.194084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:55.082 [2024-11-03 10:17:23.194091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.082 [2024-11-03 10:17:23.194161] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:55.082 [2024-11-03 10:17:23.194171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:55.082 [2024-11-03 10:17:23.194177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.082 [2024-11-03 10:17:23.194185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.082 [2024-11-03 10:17:23.194190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:55.082 [2024-11-03 10:17:23.194197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:55.082 [2024-11-03 10:17:23.194202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:55.082 [2024-11-03 10:17:23.194208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:55.082 [2024-11-03 10:17:23.194213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:55.082 [2024-11-03 10:17:23.194221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.082 [2024-11-03 10:17:23.194239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:55.082 [2024-11-03 10:17:23.194246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:55.082 [2024-11-03 10:17:23.194251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.083 [2024-11-03 10:17:23.194259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:55.083 [2024-11-03 10:17:23.194266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:55.083 [2024-11-03 10:17:23.194273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:55.083 [2024-11-03 10:17:23.194285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:55.083 [2024-11-03 10:17:23.194289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:55.083 [2024-11-03 10:17:23.194302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.083 [2024-11-03 10:17:23.194315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:55.083 [2024-11-03 10:17:23.194322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.083 [2024-11-03 10:17:23.194335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:55.083 [2024-11-03 10:17:23.194341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.083 [2024-11-03 10:17:23.194354] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:55.083 [2024-11-03 10:17:23.194362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.083 [2024-11-03 10:17:23.194375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:55.083 [2024-11-03 10:17:23.194381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.083 [2024-11-03 10:17:23.194394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:55.083 [2024-11-03 10:17:23.194402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:55.083 [2024-11-03 10:17:23.194408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.083 [2024-11-03 10:17:23.194415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:55.083 [2024-11-03 10:17:23.194424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:55.083 [2024-11-03 10:17:23.194430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:55.083 [2024-11-03 10:17:23.194443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:55.083 [2024-11-03 10:17:23.194448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194455] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:55.083 [2024-11-03 10:17:23.194465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:55.083 [2024-11-03 10:17:23.194474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.083 [2024-11-03 10:17:23.194482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.083 [2024-11-03 10:17:23.194489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:55.083 [2024-11-03 10:17:23.194495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:55.083 [2024-11-03 10:17:23.194502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:55.083 [2024-11-03 10:17:23.194508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:55.083 [2024-11-03 10:17:23.194515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:55.083 [2024-11-03 10:17:23.194521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:55.083 [2024-11-03 10:17:23.194530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:55.083 [2024-11-03 10:17:23.194542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.083 [2024-11-03 10:17:23.194551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:55.083 [2024-11-03 10:17:23.194558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:55.083 [2024-11-03 10:17:23.194565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:55.083 [2024-11-03 10:17:23.194571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:55.083 [2024-11-03 10:17:23.194580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:55.083 [2024-11-03 10:17:23.194586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:55.083 [2024-11-03 10:17:23.194596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:55.083 [2024-11-03 10:17:23.194602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:55.083 [2024-11-03 10:17:23.194609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:55.083 [2024-11-03 10:17:23.194616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:55.083 [2024-11-03 10:17:23.194623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:55.083 [2024-11-03 10:17:23.194629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:55.083 [2024-11-03 10:17:23.194637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:55.083 [2024-11-03 10:17:23.194647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:55.083 [2024-11-03 10:17:23.194654] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:55.083 [2024-11-03 10:17:23.194663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.083 [2024-11-03 10:17:23.194671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:55.083 [2024-11-03 10:17:23.194680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:55.083 [2024-11-03 10:17:23.194686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:55.083 [2024-11-03 10:17:23.194692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:55.083 [2024-11-03 10:17:23.194699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.083 [2024-11-03 10:17:23.194704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:55.083 [2024-11-03 10:17:23.194712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:22:55.083 [2024-11-03 10:17:23.194722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.083 [2024-11-03 10:17:23.194752] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:55.083 [2024-11-03 10:17:23.194763] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:58.387 [2024-11-03 10:17:26.697104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.697185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:58.387 [2024-11-03 10:17:26.697210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3502.331 ms 00:22:58.387 [2024-11-03 10:17:26.697220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.387 [2024-11-03 10:17:26.710708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.710939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.387 [2024-11-03 10:17:26.710967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.346 ms 00:22:58.387 [2024-11-03 10:17:26.710977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.387 [2024-11-03 10:17:26.711080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.711089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.387 [2024-11-03 10:17:26.711105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:58.387 [2024-11-03 10:17:26.711114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.387 [2024-11-03 10:17:26.722782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.722835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.387 [2024-11-03 10:17:26.722849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.608 ms 00:22:58.387 [2024-11-03 10:17:26.722858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.387 [2024-11-03 10:17:26.722894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.722903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.387 [2024-11-03 10:17:26.722918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:58.387 [2024-11-03 10:17:26.722926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.387 [2024-11-03 10:17:26.723508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.723532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.387 [2024-11-03 10:17:26.723546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:22:58.387 [2024-11-03 10:17:26.723555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.387 [2024-11-03 10:17:26.723687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.723698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.387 [2024-11-03 10:17:26.723710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:58.387 [2024-11-03 10:17:26.723722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.387 [2024-11-03 10:17:26.745936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.387 [2024-11-03 10:17:26.746000] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.387 [2024-11-03 10:17:26.746017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.181 ms 00:22:58.387 [2024-11-03 10:17:26.746033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.648 [2024-11-03 10:17:26.757072] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:58.648 [2024-11-03 10:17:26.761004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.648 [2024-11-03 10:17:26.761054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.648 [2024-11-03 10:17:26.761066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.852 ms 00:22:58.648 [2024-11-03 10:17:26.761077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.648 [2024-11-03 10:17:26.851833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.648 [2024-11-03 10:17:26.852067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:58.648 [2024-11-03 10:17:26.852090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.721 ms 00:22:58.648 [2024-11-03 10:17:26.852114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.648 [2024-11-03 10:17:26.852362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.648 [2024-11-03 10:17:26.852378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.648 [2024-11-03 10:17:26.852388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:22:58.648 [2024-11-03 10:17:26.852399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.648 [2024-11-03 10:17:26.858354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.648 [2024-11-03 10:17:26.858534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:58.648 [2024-11-03 10:17:26.858553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.917 ms 00:22:58.648 [2024-11-03 10:17:26.858564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.648 [2024-11-03 10:17:26.863351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.648 [2024-11-03 10:17:26.863404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:58.648 [2024-11-03 10:17:26.863415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.740 ms 00:22:58.648 [2024-11-03 10:17:26.863425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.648 [2024-11-03 10:17:26.863764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.648 [2024-11-03 10:17:26.863777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.648 [2024-11-03 10:17:26.863786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:22:58.648 [2024-11-03 10:17:26.863799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.648 [2024-11-03 10:17:26.909314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.648 [2024-11-03 10:17:26.909502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:58.648 [2024-11-03 10:17:26.909522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.478 ms 00:22:58.648 [2024-11-03 10:17:26.909534] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.648 [2024-11-03 10:17:26.916240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.648 [2024-11-03 10:17:26.916292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:22:58.648 [2024-11-03 10:17:26.916303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.631 ms
00:22:58.648 [2024-11-03 10:17:26.916314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.648 [2024-11-03 10:17:26.921097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.648 [2024-11-03 10:17:26.921148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:22:58.648 [2024-11-03 10:17:26.921159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.726 ms
00:22:58.648 [2024-11-03 10:17:26.921169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.648 [2024-11-03 10:17:26.926309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.648 [2024-11-03 10:17:26.926361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:58.648 [2024-11-03 10:17:26.926372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms
00:22:58.648 [2024-11-03 10:17:26.926385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.648 [2024-11-03 10:17:26.926434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.648 [2024-11-03 10:17:26.926446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:22:58.648 [2024-11-03 10:17:26.926455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:22:58.648 [2024-11-03 10:17:26.926466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.648 [2024-11-03 10:17:26.926555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.648 [2024-11-03 10:17:26.926569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:58.648 [2024-11-03 10:17:26.926577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:22:58.648 [2024-11-03 10:17:26.926587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.648 [2024-11-03 10:17:26.927687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3741.164 ms, result 0
00:22:58.648 {
00:22:58.648 "name": "ftl0",
00:22:58.648 "uuid": "5e441cb6-b29b-4f7b-bae7-171e68ff8085"
00:22:58.648 }
00:22:58.648 10:17:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:22:58.648 10:17:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:22:58.909 10:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:22:58.909 10:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:22:58.909 10:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:22:59.169 /dev/nbd0
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break
00:22:59.169 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:22:59.170 1+0 records in
00:22:59.170 1+0 records out
00:22:59.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048604 s, 8.4 MB/s
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0
00:22:59.170 10:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:22:59.170 [2024-11-03 10:17:27.470951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:22:59.170 [2024-11-03 10:17:27.471086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89193 ]
00:22:59.430 [2024-11-03 10:17:27.605580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:59.430 [2024-11-03 10:17:27.648147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:23:00.372  [2024-11-03T10:17:30.180Z] Copying: 261/1024 [MB] (261 MBps) [2024-11-03T10:17:30.751Z] Copying: 524/1024 [MB] (262 MBps) [2024-11-03T10:17:31.692Z] Copying: 785/1024 [MB] (261 MBps) [2024-11-03T10:17:31.952Z] Copying: 1024/1024 [MB] (average 260 MBps)
00:23:03.590
00:23:03.590 10:17:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:23:05.490 10:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:23:05.490 [2024-11-03 10:17:33.810629] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
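The waitfornbd xtrace above (common/autotest_common.sh@868-@889) is easier to follow pieced back together: it is a bounded poll for the nbd device to appear in /proc/partitions, followed by a bounded retry of a single 4 KiB O_DIRECT read to confirm the device actually serves data. A rough sketch of that helper, reconstructed from the trace only: the loop bound of 20, the grep -q -w test, and the dd/stat/rm/size-check sequence are read off the trace verbatim, while the sleep back-off, the scratch-file path, and the return-1 fallthrough are assumptions (the run above succeeds on the first pass of each loop, so those paths are never shown).

    waitfornbd() {
        local nbd_name=$1
        local i

        # Bounded wait for the kernel to publish the device (trace lines @871-@873).
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1 # assumed back-off; not visible in the trace
        done

        # Bounded retry of one 4 KiB direct read; a non-empty scratch file
        # means the device is really up (trace lines @884-@889).
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            if [ "$size" != 0 ]; then
                return 0
            fi
            sleep 0.1 # assumed back-off
        done
        return 1 # assumed failure path; not exercised in the run above
    }

(The trace writes its scratch file under spdk/test/ftl/nbdtest; /tmp/nbdtest here is just a stand-in.) The startup banner just above belongs to the dirty_shutdown.sh@77 write-back to /dev/nbd0; its EAL parameters line and copy progress continue below.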
00:23:05.490 [2024-11-03 10:17:33.810826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89263 ] 00:23:05.749 [2024-11-03 10:17:33.944017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.749 [2024-11-03 10:17:33.976126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.691  [2024-11-03T10:17:36.440Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-03T10:17:37.379Z] Copying: 27/1024 [MB] (14 MBps) [2024-11-03T10:17:38.322Z] Copying: 51/1024 [MB] (23 MBps) [2024-11-03T10:17:39.260Z] Copying: 70/1024 [MB] (19 MBps) [2024-11-03T10:17:40.193Z] Copying: 84/1024 [MB] (13 MBps) [2024-11-03T10:17:41.125Z] Copying: 100/1024 [MB] (16 MBps) [2024-11-03T10:17:42.059Z] Copying: 125/1024 [MB] (24 MBps) [2024-11-03T10:17:43.433Z] Copying: 157/1024 [MB] (32 MBps) [2024-11-03T10:17:44.363Z] Copying: 172/1024 [MB] (14 MBps) [2024-11-03T10:17:45.297Z] Copying: 186/1024 [MB] (14 MBps) [2024-11-03T10:17:46.232Z] Copying: 198/1024 [MB] (11 MBps) [2024-11-03T10:17:47.166Z] Copying: 208/1024 [MB] (10 MBps) [2024-11-03T10:17:48.132Z] Copying: 223/1024 [MB] (15 MBps) [2024-11-03T10:17:49.066Z] Copying: 241/1024 [MB] (17 MBps) [2024-11-03T10:17:50.441Z] Copying: 255/1024 [MB] (14 MBps) [2024-11-03T10:17:51.375Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-03T10:17:52.310Z] Copying: 287/1024 [MB] (20 MBps) [2024-11-03T10:17:53.244Z] Copying: 303/1024 [MB] (15 MBps) [2024-11-03T10:17:54.178Z] Copying: 319/1024 [MB] (16 MBps) [2024-11-03T10:17:55.112Z] Copying: 335/1024 [MB] (15 MBps) [2024-11-03T10:17:56.045Z] Copying: 350/1024 [MB] (15 MBps) [2024-11-03T10:17:57.419Z] Copying: 364/1024 [MB] (13 MBps) [2024-11-03T10:17:58.353Z] Copying: 394/1024 [MB] (29 MBps) [2024-11-03T10:17:59.286Z] Copying: 418/1024 [MB] (24 MBps) [2024-11-03T10:18:00.220Z] Copying: 437/1024 [MB] (19 MBps) [2024-11-03T10:18:01.154Z] Copying: 457/1024 [MB] (19 MBps) [2024-11-03T10:18:02.087Z] Copying: 482/1024 [MB] (24 MBps) [2024-11-03T10:18:03.461Z] Copying: 502/1024 [MB] (19 MBps) [2024-11-03T10:18:04.028Z] Copying: 530/1024 [MB] (28 MBps) [2024-11-03T10:18:05.457Z] Copying: 547/1024 [MB] (16 MBps) [2024-11-03T10:18:06.390Z] Copying: 565/1024 [MB] (17 MBps) [2024-11-03T10:18:07.325Z] Copying: 581/1024 [MB] (16 MBps) [2024-11-03T10:18:08.259Z] Copying: 596/1024 [MB] (15 MBps) [2024-11-03T10:18:09.193Z] Copying: 614/1024 [MB] (18 MBps) [2024-11-03T10:18:10.127Z] Copying: 632/1024 [MB] (17 MBps) [2024-11-03T10:18:11.059Z] Copying: 650/1024 [MB] (17 MBps) [2024-11-03T10:18:12.433Z] Copying: 671/1024 [MB] (20 MBps) [2024-11-03T10:18:13.366Z] Copying: 689/1024 [MB] (18 MBps) [2024-11-03T10:18:14.298Z] Copying: 707/1024 [MB] (18 MBps) [2024-11-03T10:18:15.229Z] Copying: 729/1024 [MB] (21 MBps) [2024-11-03T10:18:16.163Z] Copying: 747/1024 [MB] (18 MBps) [2024-11-03T10:18:17.096Z] Copying: 764/1024 [MB] (16 MBps) [2024-11-03T10:18:18.027Z] Copying: 784/1024 [MB] (19 MBps) [2024-11-03T10:18:19.400Z] Copying: 809/1024 [MB] (24 MBps) [2024-11-03T10:18:20.332Z] Copying: 839/1024 [MB] (30 MBps) [2024-11-03T10:18:21.266Z] Copying: 858/1024 [MB] (18 MBps) [2024-11-03T10:18:22.231Z] Copying: 881/1024 [MB] (23 MBps) [2024-11-03T10:18:23.165Z] Copying: 905/1024 [MB] (23 MBps) [2024-11-03T10:18:24.100Z] Copying: 925/1024 [MB] (20 MBps) [2024-11-03T10:18:25.035Z] Copying: 943/1024 [MB] (17 MBps) 
[2024-11-03T10:18:26.411Z] Copying: 967/1024 [MB] (23 MBps) [2024-11-03T10:18:27.354Z] Copying: 988/1024 [MB] (20 MBps) [2024-11-03T10:18:28.298Z] Copying: 1008/1024 [MB] (19 MBps) [2024-11-03T10:18:28.559Z] Copying: 1018/1024 [MB] (10 MBps) [2024-11-03T10:18:28.820Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:24:00.458 00:24:00.458 10:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:00.458 10:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:00.716 10:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:00.977 [2024-11-03 10:18:29.135085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.135128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:00.977 [2024-11-03 10:18:29.135144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:00.977 [2024-11-03 10:18:29.135152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.135177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:00.977 [2024-11-03 10:18:29.135620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.135641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:00.977 [2024-11-03 10:18:29.135653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:24:00.977 [2024-11-03 10:18:29.135662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.137829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.137865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:00.977 [2024-11-03 10:18:29.137879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:24:00.977 [2024-11-03 10:18:29.137889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.153411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.153445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:00.977 [2024-11-03 10:18:29.153456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.505 ms 00:24:00.977 [2024-11-03 10:18:29.153467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.159631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.159667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:00.977 [2024-11-03 10:18:29.159679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:24:00.977 [2024-11-03 10:18:29.159688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.160965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.161100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:00.977 [2024-11-03 10:18:29.161114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:24:00.977 [2024-11-03 10:18:29.161125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:24:00.977 [2024-11-03 10:18:29.165366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.165402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:00.977 [2024-11-03 10:18:29.165414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.210 ms 00:24:00.977 [2024-11-03 10:18:29.165423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.165537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.165548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:00.977 [2024-11-03 10:18:29.165556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:00.977 [2024-11-03 10:18:29.165565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.167022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.167054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:00.977 [2024-11-03 10:18:29.167063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:24:00.977 [2024-11-03 10:18:29.167071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.977 [2024-11-03 10:18:29.168461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.977 [2024-11-03 10:18:29.168501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:00.978 [2024-11-03 10:18:29.168509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:24:00.978 [2024-11-03 10:18:29.168517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.978 [2024-11-03 10:18:29.169590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.978 [2024-11-03 10:18:29.169622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:00.978 [2024-11-03 10:18:29.169630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:24:00.978 [2024-11-03 10:18:29.169638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.978 [2024-11-03 10:18:29.170663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.978 [2024-11-03 10:18:29.170694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:00.978 [2024-11-03 10:18:29.170703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:24:00.978 [2024-11-03 10:18:29.170714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.978 [2024-11-03 10:18:29.170742] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:00.978 [2024-11-03 10:18:29.170757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 
00:24:00.978 [2024-11-03 10:18:29.170807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.170993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:00.978 [2024-11-03 10:18:29.171435] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:00.979 [2024-11-03 10:18:29.171620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:00.979 [2024-11-03 10:18:29.171627] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e441cb6-b29b-4f7b-bae7-171e68ff8085 00:24:00.979 [2024-11-03 10:18:29.171639] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:00.979 [2024-11-03 10:18:29.171646] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:00.979 [2024-11-03 10:18:29.171654] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:00.979 
[2024-11-03 10:18:29.171662] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:00.979 [2024-11-03 10:18:29.171673] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:00.979 [2024-11-03 10:18:29.171681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:00.979 [2024-11-03 10:18:29.171690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:00.979 [2024-11-03 10:18:29.171697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:00.979 [2024-11-03 10:18:29.171705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:00.979 [2024-11-03 10:18:29.171712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.979 [2024-11-03 10:18:29.171732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:00.979 [2024-11-03 10:18:29.171740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:24:00.979 [2024-11-03 10:18:29.171749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.173097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.979 [2024-11-03 10:18:29.173126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:00.979 [2024-11-03 10:18:29.173135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:24:00.979 [2024-11-03 10:18:29.173144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.173216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.979 [2024-11-03 10:18:29.173237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:00.979 [2024-11-03 10:18:29.173246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:00.979 [2024-11-03 10:18:29.173257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.178089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.178121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.979 [2024-11-03 10:18:29.178132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.178141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.178192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.178201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.979 [2024-11-03 10:18:29.178208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.178219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.178277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.178293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.979 [2024-11-03 10:18:29.178301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.178309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.178326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.178335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:24:00.979 [2024-11-03 10:18:29.178342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.178350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.186414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.186450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.979 [2024-11-03 10:18:29.186459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.186468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.193686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.193725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.979 [2024-11-03 10:18:29.193735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.193744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.193842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.193861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.979 [2024-11-03 10:18:29.193870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.193880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.193913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.193924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.979 [2024-11-03 10:18:29.193931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.193940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.194003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.194017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:00.979 [2024-11-03 10:18:29.194025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.194034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.194062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.194073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:00.979 [2024-11-03 10:18:29.194080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.194089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.194122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 [2024-11-03 10:18:29.194136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.979 [2024-11-03 10:18:29.194144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.979 [2024-11-03 10:18:29.194152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.979 [2024-11-03 10:18:29.194192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.979 
[2024-11-03 10:18:29.194202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:00.979 [2024-11-03 10:18:29.194210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.979 [2024-11-03 10:18:29.194219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.979 [2024-11-03 10:18:29.194360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.243 ms, result 0
00:24:00.979 true
00:24:00.979 10:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89051
00:24:00.979 10:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89051
00:24:00.979 10:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:24:01.239 [2024-11-03 10:18:29.269596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:24:01.239 [2024-11-03 10:18:29.269703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89842 ]
00:24:01.239 [2024-11-03 10:18:29.399402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:01.239 [2024-11-03 10:18:29.436659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:24:02.180  [2024-11-03T10:18:31.927Z] Copying: 220/1024 [MB] (220 MBps) [2024-11-03T10:18:32.869Z] Copying: 481/1024 [MB] (261 MBps) [2024-11-03T10:18:33.810Z] Copying: 741/1024 [MB] (259 MBps) [2024-11-03T10:18:33.810Z] Copying: 1000/1024 [MB] (259 MBps) [2024-11-03T10:18:33.810Z] Copying: 1024/1024 [MB] (average 250 MBps)
00:24:05.448
00:24:05.448 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89051 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:24:05.448 10:18:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:05.709 [2024-11-03 10:18:33.819459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
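Condensed, the dirty-shutdown sequence traced above (dirty_shutdown.sh@83-@88) is: SIGKILL the target that owns ftl0 so no FTL shutdown path runs, clear its trace shm file, generate a second 1 GiB random payload, then have a stand-alone spdk_dd bring the now-dirty ftl0 back up from the JSON config saved earlier and write that payload at a block offset. Restated as a sketch under those assumptions -- every command is taken from the trace; only the $svcpid variable name is an assumption (the log shows the literal pid 89051):

    # SIGKILL: the target never reaches the FTL shutdown path, leaving ftl0 dirty.
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"

    # Second random payload (262144 x 4 KiB blocks = 1 GiB).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144

    # Stand-alone spdk_dd loads the saved bdev config (ftl.json), which re-creates
    # ftl0 in its dirty state, then writes the payload at block offset 262144.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The startup banner just above belongs to that @88 write into ftl0; its EAL parameters line and the dirty-state FTL bring-up continue below.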
00:24:05.709 [2024-11-03 10:18:33.819587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89896 ] 00:24:05.709 [2024-11-03 10:18:33.954570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.709 [2024-11-03 10:18:33.991125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.970 [2024-11-03 10:18:34.072691] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.970 [2024-11-03 10:18:34.072741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.970 [2024-11-03 10:18:34.134395] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:05.970 [2024-11-03 10:18:34.134827] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:05.970 [2024-11-03 10:18:34.135028] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:06.233 [2024-11-03 10:18:34.362826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.362944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:06.233 [2024-11-03 10:18:34.362999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:06.233 [2024-11-03 10:18:34.363023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.363080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.363100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:06.233 [2024-11-03 10:18:34.363117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:06.233 [2024-11-03 10:18:34.363172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.363202] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:06.233 [2024-11-03 10:18:34.363436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:06.233 [2024-11-03 10:18:34.363472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.363487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:06.233 [2024-11-03 10:18:34.363524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:24:06.233 [2024-11-03 10:18:34.363544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.364513] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:06.233 [2024-11-03 10:18:34.366473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.366563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:06.233 [2024-11-03 10:18:34.366609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:24:06.233 [2024-11-03 10:18:34.366625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.366670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.366692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:06.233 [2024-11-03 10:18:34.366820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:06.233 [2024-11-03 10:18:34.366838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.371019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.371101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:06.233 [2024-11-03 10:18:34.371155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.130 ms 00:24:06.233 [2024-11-03 10:18:34.371172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.371252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.371328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:06.233 [2024-11-03 10:18:34.371346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:06.233 [2024-11-03 10:18:34.371361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.371405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.371425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:06.233 [2024-11-03 10:18:34.371439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:06.233 [2024-11-03 10:18:34.371456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.371514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:06.233 [2024-11-03 10:18:34.372663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.372735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:06.233 [2024-11-03 10:18:34.372780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:24:06.233 [2024-11-03 10:18:34.372797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.372832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.233 [2024-11-03 10:18:34.372852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:06.233 [2024-11-03 10:18:34.372896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:06.233 [2024-11-03 10:18:34.372913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.233 [2024-11-03 10:18:34.372941] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:06.233 [2024-11-03 10:18:34.372968] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:06.233 [2024-11-03 10:18:34.373054] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:06.233 [2024-11-03 10:18:34.373102] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:06.234 [2024-11-03 10:18:34.373193] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:06.234 [2024-11-03 10:18:34.373202] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:06.234 
[2024-11-03 10:18:34.373210] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:06.234 [2024-11-03 10:18:34.373217] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373237] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373244] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:06.234 [2024-11-03 10:18:34.373253] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:06.234 [2024-11-03 10:18:34.373259] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:06.234 [2024-11-03 10:18:34.373267] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:06.234 [2024-11-03 10:18:34.373273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.234 [2024-11-03 10:18:34.373281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:06.234 [2024-11-03 10:18:34.373287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:24:06.234 [2024-11-03 10:18:34.373292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.234 [2024-11-03 10:18:34.373363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.234 [2024-11-03 10:18:34.373371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:06.234 [2024-11-03 10:18:34.373377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:06.234 [2024-11-03 10:18:34.373385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.234 [2024-11-03 10:18:34.373459] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:06.234 [2024-11-03 10:18:34.373468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:06.234 [2024-11-03 10:18:34.373478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:06.234 [2024-11-03 10:18:34.373495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:06.234 [2024-11-03 10:18:34.373512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:06.234 [2024-11-03 10:18:34.373522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:06.234 [2024-11-03 10:18:34.373531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:06.234 [2024-11-03 10:18:34.373536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:06.234 [2024-11-03 10:18:34.373541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:06.234 [2024-11-03 10:18:34.373546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:06.234 [2024-11-03 10:18:34.373551] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:06.234 [2024-11-03 10:18:34.373561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:06.234 [2024-11-03 10:18:34.373577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:06.234 [2024-11-03 10:18:34.373591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:06.234 [2024-11-03 10:18:34.373606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:06.234 [2024-11-03 10:18:34.373625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:06.234 [2024-11-03 10:18:34.373640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:06.234 [2024-11-03 10:18:34.373651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:06.234 [2024-11-03 10:18:34.373656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:06.234 [2024-11-03 10:18:34.373660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:06.234 [2024-11-03 10:18:34.373665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:06.234 [2024-11-03 10:18:34.373670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:06.234 [2024-11-03 10:18:34.373675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:06.234 [2024-11-03 10:18:34.373688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:06.234 [2024-11-03 10:18:34.373693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.234 [2024-11-03 10:18:34.373699] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:06.234 [2024-11-03 10:18:34.373705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:06.234 [2024-11-03 10:18:34.373712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.234 [2024-11-03 
10:18:34.373723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:06.234 [2024-11-03 10:18:34.373728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:06.234 [2024-11-03 10:18:34.373733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:06.234 [2024-11-03 10:18:34.373738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:06.234 [2024-11-03 10:18:34.373743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:06.234 [2024-11-03 10:18:34.373748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:06.234 [2024-11-03 10:18:34.373754] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:06.234 [2024-11-03 10:18:34.373761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.234 [2024-11-03 10:18:34.373768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:06.234 [2024-11-03 10:18:34.373774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:06.234 [2024-11-03 10:18:34.373780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:06.234 [2024-11-03 10:18:34.373787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:06.234 [2024-11-03 10:18:34.373794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:06.234 [2024-11-03 10:18:34.373804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:06.234 [2024-11-03 10:18:34.373810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:06.234 [2024-11-03 10:18:34.373816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:06.234 [2024-11-03 10:18:34.373822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:06.234 [2024-11-03 10:18:34.373828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:06.234 [2024-11-03 10:18:34.373835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:06.234 [2024-11-03 10:18:34.373841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:06.234 [2024-11-03 10:18:34.373847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:06.234 [2024-11-03 10:18:34.373854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:06.234 [2024-11-03 10:18:34.373860] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:06.234 [2024-11-03 10:18:34.373867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.234 [2024-11-03 10:18:34.373874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:06.234 [2024-11-03 10:18:34.373881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:06.234 [2024-11-03 10:18:34.373887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:06.234 [2024-11-03 10:18:34.373894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:06.234 [2024-11-03 10:18:34.373902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.234 [2024-11-03 10:18:34.373909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:06.235 [2024-11-03 10:18:34.373918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:24:06.235 [2024-11-03 10:18:34.373926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.388869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.388974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.235 [2024-11-03 10:18:34.389033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.908 ms 00:24:06.235 [2024-11-03 10:18:34.389051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.389144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.389163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:06.235 [2024-11-03 10:18:34.389182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:06.235 [2024-11-03 10:18:34.389196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.397037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.397153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.235 [2024-11-03 10:18:34.397212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.779 ms 00:24:06.235 [2024-11-03 10:18:34.397346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.397406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.397488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.235 [2024-11-03 10:18:34.397517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:06.235 [2024-11-03 10:18:34.397565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.397941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.398033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.235 [2024-11-03 10:18:34.398093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:24:06.235 [2024-11-03 10:18:34.398161] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.398328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.398357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.235 [2024-11-03 10:18:34.398422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:24:06.235 [2024-11-03 10:18:34.398448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.403075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.403182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:06.235 [2024-11-03 10:18:34.403254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:24:06.235 [2024-11-03 10:18:34.403328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.405339] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:06.235 [2024-11-03 10:18:34.405356] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:06.235 [2024-11-03 10:18:34.405364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.405370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:06.235 [2024-11-03 10:18:34.405377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:24:06.235 [2024-11-03 10:18:34.405382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.416511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.416585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:06.235 [2024-11-03 10:18:34.416673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.101 ms 00:24:06.235 [2024-11-03 10:18:34.416691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.418194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.418285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:06.235 [2024-11-03 10:18:34.418351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:24:06.235 [2024-11-03 10:18:34.418368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.419564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.419636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:06.235 [2024-11-03 10:18:34.419681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:24:06.235 [2024-11-03 10:18:34.419698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.419942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.420002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:06.235 [2024-11-03 10:18:34.420049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:24:06.235 [2024-11-03 10:18:34.420066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 
[2024-11-03 10:18:34.433755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.433858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:06.235 [2024-11-03 10:18:34.433900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.666 ms 00:24:06.235 [2024-11-03 10:18:34.433918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.439629] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:06.235 [2024-11-03 10:18:34.441498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.441568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:06.235 [2024-11-03 10:18:34.441607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.547 ms 00:24:06.235 [2024-11-03 10:18:34.441631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.441680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.441741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:06.235 [2024-11-03 10:18:34.441762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:06.235 [2024-11-03 10:18:34.441776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.441849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.441954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:06.235 [2024-11-03 10:18:34.441972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:06.235 [2024-11-03 10:18:34.441986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.442018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.442034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:06.235 [2024-11-03 10:18:34.442085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:06.235 [2024-11-03 10:18:34.442101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.442135] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:06.235 [2024-11-03 10:18:34.442155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.442170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:06.235 [2024-11-03 10:18:34.442186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:06.235 [2024-11-03 10:18:34.442231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.444953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.445034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:06.235 [2024-11-03 10:18:34.445071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.693 ms 00:24:06.235 [2024-11-03 10:18:34.445089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.445149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.235 [2024-11-03 10:18:34.445243] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:06.235 [2024-11-03 10:18:34.445269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:06.235 [2024-11-03 10:18:34.445285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.235 [2024-11-03 10:18:34.446065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 82.931 ms, result 0 00:24:07.179  [2024-11-03T10:19:34.815Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-03 10:19:34.576459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.453 [2024-11-03 10:19:34.576513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:06.453 [2024-11-03 10:19:34.576525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:06.453 [2024-11-03 10:19:34.576532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.453 [2024-11-03 10:19:34.577906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:06.453 [2024-11-03 10:19:34.582144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.453 [2024-11-03 10:19:34.582173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:06.453 [2024-11-03 10:19:34.582181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:25:06.453 [2024-11-03 10:19:34.582193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.453 [2024-11-03 10:19:34.589303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.453 [2024-11-03 10:19:34.589330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:06.453 [2024-11-03 10:19:34.589339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.700 ms 00:25:06.453 [2024-11-03 10:19:34.589345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.453 [2024-11-03 10:19:34.606345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.453 [2024-11-03 10:19:34.606374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:06.454 [2024-11-03 10:19:34.606387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.988 ms 00:25:06.454 [2024-11-03 10:19:34.606393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.611159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.611182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:06.454 [2024-11-03 10:19:34.611191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:25:06.454 [2024-11-03 10:19:34.611198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.612358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.612384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:06.454 [2024-11-03 10:19:34.612392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:25:06.454 [2024-11-03 10:19:34.612397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.615568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.615597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:06.454 [2024-11-03 10:19:34.615609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:25:06.454 [2024-11-03 10:19:34.615615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.689143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.689183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:06.454 [2024-11-03 10:19:34.689192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.502 ms 00:25:06.454 [2024-11-03 10:19:34.689198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.690939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.691049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:06.454 [2024-11-03 10:19:34.691062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:25:06.454 [2024-11-03 10:19:34.691067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.692536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.692560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:06.454 [2024-11-03 10:19:34.692567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:25:06.454 [2024-11-03 10:19:34.692572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.693823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.693850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:06.454 [2024-11-03 10:19:34.693857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:25:06.454 [2024-11-03 10:19:34.693862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.694982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.454 [2024-11-03 10:19:34.695010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:06.454 [2024-11-03 10:19:34.695017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:25:06.454 [2024-11-03 10:19:34.695023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.454 [2024-11-03 10:19:34.695045] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:06.454 [2024-11-03 10:19:34.695056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115968 / 261120 wr_cnt: 1 state: open 00:25:06.454 [2024-11-03 10:19:34.695064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695335] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 
10:19:34.695494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:06.454 [2024-11-03 10:19:34.695530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:25:06.455 [2024-11-03 10:19:34.695641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:06.455 [2024-11-03 10:19:34.695765] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:06.455 [2024-11-03 10:19:34.695773] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e441cb6-b29b-4f7b-bae7-171e68ff8085 00:25:06.455 [2024-11-03 10:19:34.695782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115968 00:25:06.455 [2024-11-03 10:19:34.695787] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116928 00:25:06.455 [2024-11-03 10:19:34.695793] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115968 00:25:06.455 [2024-11-03 10:19:34.695799] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:25:06.455 [2024-11-03 10:19:34.695805] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:06.455 [2024-11-03 10:19:34.695811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:06.455 [2024-11-03 10:19:34.695817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:06.455 [2024-11-03 10:19:34.695822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:06.455 [2024-11-03 10:19:34.695827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:06.455 [2024-11-03 10:19:34.695832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.455 [2024-11-03 10:19:34.695839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:06.455 [2024-11-03 10:19:34.695845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:25:06.455 [2024-11-03 10:19:34.695851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.697095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.455 [2024-11-03 10:19:34.697114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:06.455 [2024-11-03 10:19:34.697122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:25:06.455 [2024-11-03 10:19:34.697128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.697196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.455 [2024-11-03 10:19:34.697203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:06.455 [2024-11-03 10:19:34.697212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:06.455 [2024-11-03 10:19:34.697217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.701022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.701114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.455 [2024-11-03 10:19:34.701156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.701173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.701215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.701222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:06.455 [2024-11-03 10:19:34.701255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.701261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.701290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.701301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.455 [2024-11-03 10:19:34.701307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.701312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.701332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.701339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:25:06.455 [2024-11-03 10:19:34.701348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.701355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.708710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.708740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.455 [2024-11-03 10:19:34.708748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.708754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.714765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.714795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.455 [2024-11-03 10:19:34.714808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.714814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.714838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.714844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.455 [2024-11-03 10:19:34.714850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.714856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.714888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.714894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.455 [2024-11-03 10:19:34.714901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.714906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.714955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.714962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.455 [2024-11-03 10:19:34.714968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.714977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.714997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.455 [2024-11-03 10:19:34.715006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:06.455 [2024-11-03 10:19:34.715012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.455 [2024-11-03 10:19:34.715018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.455 [2024-11-03 10:19:34.715049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.456 [2024-11-03 10:19:34.715056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.456 [2024-11-03 10:19:34.715062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.456 [2024-11-03 10:19:34.715068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.456 [2024-11-03 10:19:34.715100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.456 [2024-11-03 10:19:34.715107] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.456 [2024-11-03 10:19:34.715114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.456 [2024-11-03 10:19:34.715119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.456 [2024-11-03 10:19:34.715215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 140.201 ms, result 0 00:25:07.884 00:25:07.884 00:25:07.884 10:19:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:09.822 10:19:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:09.822 [2024-11-03 10:19:37.892840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:09.822 [2024-11-03 10:19:37.892957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90542 ] 00:25:09.822 [2024-11-03 10:19:38.025894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.822 [2024-11-03 10:19:38.070452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.822 [2024-11-03 10:19:38.180604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.822 [2024-11-03 10:19:38.180873] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.084 [2024-11-03 10:19:38.343095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.084 [2024-11-03 10:19:38.343356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:10.084 [2024-11-03 10:19:38.343389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:10.084 [2024-11-03 10:19:38.343398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.084 [2024-11-03 10:19:38.343480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.084 [2024-11-03 10:19:38.343492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.084 [2024-11-03 10:19:38.343502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:10.084 [2024-11-03 10:19:38.343516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.084 [2024-11-03 10:19:38.343539] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:10.085 [2024-11-03 10:19:38.343817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:10.085 [2024-11-03 10:19:38.343834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.343843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.085 [2024-11-03 10:19:38.343853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:25:10.085 [2024-11-03 10:19:38.343860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.345715] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 
0, shm_clean 0 00:25:10.085 [2024-11-03 10:19:38.349655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.349716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:10.085 [2024-11-03 10:19:38.349728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.944 ms 00:25:10.085 [2024-11-03 10:19:38.349736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.349818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.349835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:10.085 [2024-11-03 10:19:38.349843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:10.085 [2024-11-03 10:19:38.349851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.358085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.358301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.085 [2024-11-03 10:19:38.358320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.191 ms 00:25:10.085 [2024-11-03 10:19:38.358340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.358453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.358464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.085 [2024-11-03 10:19:38.358473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:10.085 [2024-11-03 10:19:38.358481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.358538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.358553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:10.085 [2024-11-03 10:19:38.358566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:10.085 [2024-11-03 10:19:38.358579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.358600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:10.085 [2024-11-03 10:19:38.360655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.360690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.085 [2024-11-03 10:19:38.360701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:25:10.085 [2024-11-03 10:19:38.360714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.360750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.360758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:10.085 [2024-11-03 10:19:38.360767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:10.085 [2024-11-03 10:19:38.360774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.360797] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:10.085 [2024-11-03 10:19:38.360827] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:10.085 [2024-11-03 10:19:38.360868] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:10.085 [2024-11-03 10:19:38.360888] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:10.085 [2024-11-03 10:19:38.360994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:10.085 [2024-11-03 10:19:38.361005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:10.085 [2024-11-03 10:19:38.361016] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:10.085 [2024-11-03 10:19:38.361031] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361049] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361058] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:10.085 [2024-11-03 10:19:38.361066] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:10.085 [2024-11-03 10:19:38.361078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:10.085 [2024-11-03 10:19:38.361086] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:10.085 [2024-11-03 10:19:38.361095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.361102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:10.085 [2024-11-03 10:19:38.361110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:25:10.085 [2024-11-03 10:19:38.361118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.361200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.085 [2024-11-03 10:19:38.361209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:10.085 [2024-11-03 10:19:38.361220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:10.085 [2024-11-03 10:19:38.361247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.085 [2024-11-03 10:19:38.361346] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:10.085 [2024-11-03 10:19:38.361357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:10.085 [2024-11-03 10:19:38.361370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:10.085 [2024-11-03 10:19:38.361403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:10.085 [2024-11-03 10:19:38.361428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361452] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.085 [2024-11-03 10:19:38.361461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:10.085 [2024-11-03 10:19:38.361469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:10.085 [2024-11-03 10:19:38.361486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.085 [2024-11-03 10:19:38.361493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:10.085 [2024-11-03 10:19:38.361501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:10.085 [2024-11-03 10:19:38.361509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:10.085 [2024-11-03 10:19:38.361525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:10.085 [2024-11-03 10:19:38.361555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:10.085 [2024-11-03 10:19:38.361580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:10.085 [2024-11-03 10:19:38.361604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:10.085 [2024-11-03 10:19:38.361627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:10.085 [2024-11-03 10:19:38.361651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.085 [2024-11-03 10:19:38.361669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:10.085 [2024-11-03 10:19:38.361677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:10.085 [2024-11-03 10:19:38.361684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.085 [2024-11-03 10:19:38.361693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:10.085 [2024-11-03 10:19:38.361700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:10.085 [2024-11-03 10:19:38.361707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:10.085 [2024-11-03 
10:19:38.361720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:10.085 [2024-11-03 10:19:38.361726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.085 [2024-11-03 10:19:38.361733] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:10.085 [2024-11-03 10:19:38.361741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:10.085 [2024-11-03 10:19:38.361751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.085 [2024-11-03 10:19:38.361761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.086 [2024-11-03 10:19:38.361769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:10.086 [2024-11-03 10:19:38.361775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:10.086 [2024-11-03 10:19:38.361781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:10.086 [2024-11-03 10:19:38.361790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:10.086 [2024-11-03 10:19:38.361797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:10.086 [2024-11-03 10:19:38.361805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:10.086 [2024-11-03 10:19:38.361814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:10.086 [2024-11-03 10:19:38.361824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.086 [2024-11-03 10:19:38.361833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:10.086 [2024-11-03 10:19:38.361841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:10.086 [2024-11-03 10:19:38.361849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:10.086 [2024-11-03 10:19:38.361856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:10.086 [2024-11-03 10:19:38.361864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:10.086 [2024-11-03 10:19:38.361871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:10.086 [2024-11-03 10:19:38.361878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:10.086 [2024-11-03 10:19:38.361885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:10.086 [2024-11-03 10:19:38.361893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:10.086 [2024-11-03 10:19:38.361906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:10.086 [2024-11-03 10:19:38.361913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:25:10.086 [2024-11-03 10:19:38.361923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:10.086 [2024-11-03 10:19:38.361930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:10.086 [2024-11-03 10:19:38.361938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:10.086 [2024-11-03 10:19:38.361945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:10.086 [2024-11-03 10:19:38.361956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.086 [2024-11-03 10:19:38.361965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:10.086 [2024-11-03 10:19:38.361973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:10.086 [2024-11-03 10:19:38.361980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:10.086 [2024-11-03 10:19:38.361987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:10.086 [2024-11-03 10:19:38.361995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.362003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:10.086 [2024-11-03 10:19:38.362011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:25:10.086 [2024-11-03 10:19:38.362018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.388095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.388156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.086 [2024-11-03 10:19:38.388170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.032 ms 00:25:10.086 [2024-11-03 10:19:38.388179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.388332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.388352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:10.086 [2024-11-03 10:19:38.388362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:10.086 [2024-11-03 10:19:38.388374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.401318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.401361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.086 [2024-11-03 10:19:38.401372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.873 ms 00:25:10.086 [2024-11-03 10:19:38.401380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.401420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.401429] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.086 [2024-11-03 10:19:38.401437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:10.086 [2024-11-03 10:19:38.401445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.401969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.402002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.086 [2024-11-03 10:19:38.402013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:25:10.086 [2024-11-03 10:19:38.402021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.402170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.402187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.086 [2024-11-03 10:19:38.402202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:25:10.086 [2024-11-03 10:19:38.402210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.408800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.408843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.086 [2024-11-03 10:19:38.408860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.549 ms 00:25:10.086 [2024-11-03 10:19:38.408868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.412554] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:10.086 [2024-11-03 10:19:38.412600] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:10.086 [2024-11-03 10:19:38.412620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.412628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:10.086 [2024-11-03 10:19:38.412637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:25:10.086 [2024-11-03 10:19:38.412644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.428711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.428753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:10.086 [2024-11-03 10:19:38.428772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.017 ms 00:25:10.086 [2024-11-03 10:19:38.428786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.431368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.431415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:10.086 [2024-11-03 10:19:38.431425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:25:10.086 [2024-11-03 10:19:38.431433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.434159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.434364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:25:10.086 [2024-11-03 10:19:38.434385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:25:10.086 [2024-11-03 10:19:38.434392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.086 [2024-11-03 10:19:38.434726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.086 [2024-11-03 10:19:38.434739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:10.086 [2024-11-03 10:19:38.434749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:25:10.086 [2024-11-03 10:19:38.434764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.459831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.459899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:10.348 [2024-11-03 10:19:38.459913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.050 ms 00:25:10.348 [2024-11-03 10:19:38.459921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.468091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:10.348 [2024-11-03 10:19:38.470964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.471011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:10.348 [2024-11-03 10:19:38.471026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.992 ms 00:25:10.348 [2024-11-03 10:19:38.471035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.471110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.471126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:10.348 [2024-11-03 10:19:38.471136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:10.348 [2024-11-03 10:19:38.471144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.472968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.473011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:10.348 [2024-11-03 10:19:38.473022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:25:10.348 [2024-11-03 10:19:38.473034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.473066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.473080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:10.348 [2024-11-03 10:19:38.473092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:10.348 [2024-11-03 10:19:38.473100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.473137] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:10.348 [2024-11-03 10:19:38.473152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.473160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:10.348 [2024-11-03 10:19:38.473169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 
00:25:10.348 [2024-11-03 10:19:38.473176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.478718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.478766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.348 [2024-11-03 10:19:38.478778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.519 ms 00:25:10.348 [2024-11-03 10:19:38.478786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.478884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.348 [2024-11-03 10:19:38.478899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.348 [2024-11-03 10:19:38.478908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:10.348 [2024-11-03 10:19:38.478916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.348 [2024-11-03 10:19:38.480287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.673 ms, result 0 00:25:11.735  [2024-11-03T10:19:40.668Z] Copying: 1056/1048576 [kB] (1056 kBps) [2024-11-03T10:19:42.056Z] Copying: 4884/1048576 [kB] (3828 kBps) [2024-11-03T10:19:42.999Z] Copying: 26/1024 [MB] (21 MBps) [2024-11-03T10:19:43.942Z] Copying: 49/1024 [MB] (23 MBps) [2024-11-03T10:19:44.886Z] Copying: 86/1024 [MB] (36 MBps) [2024-11-03T10:19:45.830Z] Copying: 114/1024 [MB] (28 MBps) [2024-11-03T10:19:46.773Z] Copying: 143/1024 [MB] (29 MBps) [2024-11-03T10:19:47.716Z] Copying: 167/1024 [MB] (23 MBps) [2024-11-03T10:19:48.675Z] Copying: 189/1024 [MB] (22 MBps) [2024-11-03T10:19:50.060Z] Copying: 208/1024 [MB] (19 MBps) [2024-11-03T10:19:51.003Z] Copying: 236/1024 [MB] (28 MBps) [2024-11-03T10:19:51.949Z] Copying: 255/1024 [MB] (18 MBps) [2024-11-03T10:19:52.892Z] Copying: 277/1024 [MB] (22 MBps) [2024-11-03T10:19:53.900Z] Copying: 303/1024 [MB] (25 MBps) [2024-11-03T10:19:54.844Z] Copying: 322/1024 [MB] (18 MBps) [2024-11-03T10:19:55.787Z] Copying: 343/1024 [MB] (21 MBps) [2024-11-03T10:19:56.730Z] Copying: 375/1024 [MB] (32 MBps) [2024-11-03T10:19:57.675Z] Copying: 409/1024 [MB] (33 MBps) [2024-11-03T10:19:59.064Z] Copying: 427/1024 [MB] (18 MBps) [2024-11-03T10:20:00.007Z] Copying: 448/1024 [MB] (20 MBps) [2024-11-03T10:20:00.948Z] Copying: 466/1024 [MB] (18 MBps) [2024-11-03T10:20:01.892Z] Copying: 493/1024 [MB] (26 MBps) [2024-11-03T10:20:02.837Z] Copying: 516/1024 [MB] (23 MBps) [2024-11-03T10:20:03.781Z] Copying: 541/1024 [MB] (25 MBps) [2024-11-03T10:20:04.724Z] Copying: 566/1024 [MB] (24 MBps) [2024-11-03T10:20:05.670Z] Copying: 591/1024 [MB] (25 MBps) [2024-11-03T10:20:07.057Z] Copying: 609/1024 [MB] (18 MBps) [2024-11-03T10:20:08.000Z] Copying: 629/1024 [MB] (19 MBps) [2024-11-03T10:20:08.945Z] Copying: 653/1024 [MB] (24 MBps) [2024-11-03T10:20:09.889Z] Copying: 673/1024 [MB] (20 MBps) [2024-11-03T10:20:10.833Z] Copying: 689/1024 [MB] (15 MBps) [2024-11-03T10:20:11.779Z] Copying: 709/1024 [MB] (20 MBps) [2024-11-03T10:20:12.723Z] Copying: 740/1024 [MB] (30 MBps) [2024-11-03T10:20:13.667Z] Copying: 769/1024 [MB] (28 MBps) [2024-11-03T10:20:15.092Z] Copying: 800/1024 [MB] (30 MBps) [2024-11-03T10:20:15.670Z] Copying: 821/1024 [MB] (21 MBps) [2024-11-03T10:20:17.056Z] Copying: 848/1024 [MB] (26 MBps) [2024-11-03T10:20:17.999Z] Copying: 877/1024 [MB] (29 MBps) [2024-11-03T10:20:18.941Z] Copying: 904/1024 [MB] (27 MBps) 
[2024-11-03T10:20:19.885Z] Copying: 931/1024 [MB] (27 MBps) [2024-11-03T10:20:20.828Z] Copying: 962/1024 [MB] (31 MBps) [2024-11-03T10:20:21.772Z] Copying: 983/1024 [MB] (20 MBps) [2024-11-03T10:20:22.344Z] Copying: 1005/1024 [MB] (22 MBps) [2024-11-03T10:20:22.605Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-03 10:20:22.558732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.558845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:54.243 [2024-11-03 10:20:22.558880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:54.243 [2024-11-03 10:20:22.558902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.558958] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:54.243 [2024-11-03 10:20:22.560164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.560309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:54.243 [2024-11-03 10:20:22.560337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:25:54.243 [2024-11-03 10:20:22.560352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.560652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.560668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:54.243 [2024-11-03 10:20:22.560680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:25:54.243 [2024-11-03 10:20:22.560691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.575367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.575418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:54.243 [2024-11-03 10:20:22.575430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.656 ms 00:25:54.243 [2024-11-03 10:20:22.575448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.581813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.581860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:54.243 [2024-11-03 10:20:22.581877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.334 ms 00:25:54.243 [2024-11-03 10:20:22.581886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.584655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.584703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:54.243 [2024-11-03 10:20:22.584714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:25:54.243 [2024-11-03 10:20:22.584723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.590141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.590369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:54.243 [2024-11-03 10:20:22.590392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.374 ms 00:25:54.243 [2024-11-03 10:20:22.590400] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.595084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.595210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:54.243 [2024-11-03 10:20:22.595302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.629 ms 00:25:54.243 [2024-11-03 10:20:22.595328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.598625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.598777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:54.243 [2024-11-03 10:20:22.598838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:25:54.243 [2024-11-03 10:20:22.598861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.601729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.601928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:54.243 [2024-11-03 10:20:22.602002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.744 ms 00:25:54.243 [2024-11-03 10:20:22.602028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.243 [2024-11-03 10:20:22.603660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.243 [2024-11-03 10:20:22.603806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:54.243 [2024-11-03 10:20:22.603860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:25:54.243 [2024-11-03 10:20:22.603882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.506 [2024-11-03 10:20:22.605709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.506 [2024-11-03 10:20:22.605865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:54.506 [2024-11-03 10:20:22.605922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.665 ms 00:25:54.506 [2024-11-03 10:20:22.605943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.506 [2024-11-03 10:20:22.605985] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:54.506 [2024-11-03 10:20:22.606013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:54.506 [2024-11-03 10:20:22.606048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:54.506 [2024-11-03 10:20:22.606080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.606997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.607956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.608969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:54.506 [2024-11-03 10:20:22.609341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609534] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.609997] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:54.507 [2024-11-03 10:20:22.610154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:54.507 [2024-11-03 10:20:22.610163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e441cb6-b29b-4f7b-bae7-171e68ff8085 00:25:54.507 [2024-11-03 10:20:22.610173] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:54.507 [2024-11-03 10:20:22.610182] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 148672 00:25:54.507 [2024-11-03 10:20:22.610207] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 146688 00:25:54.507 [2024-11-03 10:20:22.610217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0135 00:25:54.507 [2024-11-03 10:20:22.610245] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:54.507 [2024-11-03 10:20:22.610255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:54.507 [2024-11-03 10:20:22.610267] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:25:54.507 [2024-11-03 10:20:22.610274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:54.507 [2024-11-03 10:20:22.610282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:54.507 [2024-11-03 10:20:22.610292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.507 [2024-11-03 10:20:22.610301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:54.507 [2024-11-03 10:20:22.610310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:25:54.507 [2024-11-03 10:20:22.610320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.612691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.507 [2024-11-03 10:20:22.612831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:54.507 [2024-11-03 10:20:22.612906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.339 ms 00:25:54.507 [2024-11-03 10:20:22.612931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.613089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.507 [2024-11-03 10:20:22.613174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:54.507 [2024-11-03 10:20:22.613249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:54.507 [2024-11-03 10:20:22.613273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.619532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 10:20:22.619674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:54.507 [2024-11-03 10:20:22.619731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.619753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.619847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 10:20:22.619873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:54.507 [2024-11-03 10:20:22.619969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.619994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.620063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 10:20:22.620184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:54.507 [2024-11-03 10:20:22.620219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.620319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.620356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 10:20:22.620379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:54.507 [2024-11-03 10:20:22.620453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.620476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.632450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 
10:20:22.632619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.507 [2024-11-03 10:20:22.632676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.632717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.642364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 10:20:22.642520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:54.507 [2024-11-03 10:20:22.642571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.642593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.642645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 10:20:22.642662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.507 [2024-11-03 10:20:22.642672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.642685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.642717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.507 [2024-11-03 10:20:22.642726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.507 [2024-11-03 10:20:22.642734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.507 [2024-11-03 10:20:22.642743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.507 [2024-11-03 10:20:22.642819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.508 [2024-11-03 10:20:22.642828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.508 [2024-11-03 10:20:22.642842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.508 [2024-11-03 10:20:22.642850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.508 [2024-11-03 10:20:22.642889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.508 [2024-11-03 10:20:22.642904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:54.508 [2024-11-03 10:20:22.642913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.508 [2024-11-03 10:20:22.642921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.508 [2024-11-03 10:20:22.642965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.508 [2024-11-03 10:20:22.642975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.508 [2024-11-03 10:20:22.642987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.508 [2024-11-03 10:20:22.642995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.508 [2024-11-03 10:20:22.643041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.508 [2024-11-03 10:20:22.643052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.508 [2024-11-03 10:20:22.643064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.508 [2024-11-03 10:20:22.643074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.508 [2024-11-03 10:20:22.643209] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.470 ms, result 0 00:25:54.508 00:25:54.508 00:25:54.768 10:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.683 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:56.683 10:20:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.944 [2024-11-03 10:20:25.050513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:56.944 [2024-11-03 10:20:25.050757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91029 ] 00:25:56.944 [2024-11-03 10:20:25.187933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.944 [2024-11-03 10:20:25.233328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.206 [2024-11-03 10:20:25.346564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.206 [2024-11-03 10:20:25.346890] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.206 [2024-11-03 10:20:25.507349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.206 [2024-11-03 10:20:25.507403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:57.206 [2024-11-03 10:20:25.507421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.206 [2024-11-03 10:20:25.507430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.206 [2024-11-03 10:20:25.507487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.206 [2024-11-03 10:20:25.507499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.206 [2024-11-03 10:20:25.507508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:57.206 [2024-11-03 10:20:25.507521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.206 [2024-11-03 10:20:25.507545] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:57.206 [2024-11-03 10:20:25.507817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:57.206 [2024-11-03 10:20:25.507837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.206 [2024-11-03 10:20:25.507846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.206 [2024-11-03 10:20:25.507858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:25:57.206 [2024-11-03 10:20:25.507866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.206 [2024-11-03 10:20:25.509702] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:57.206 [2024-11-03 10:20:25.513535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.513579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:57.207 [2024-11-03 
10:20:25.513591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.836 ms 00:25:57.207 [2024-11-03 10:20:25.513607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.513678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.513693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:57.207 [2024-11-03 10:20:25.513702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:57.207 [2024-11-03 10:20:25.513709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.521610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.521652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.207 [2024-11-03 10:20:25.521663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.856 ms 00:25:57.207 [2024-11-03 10:20:25.521671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.521767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.521777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.207 [2024-11-03 10:20:25.521786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:57.207 [2024-11-03 10:20:25.521796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.521854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.521865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.207 [2024-11-03 10:20:25.521875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:57.207 [2024-11-03 10:20:25.521883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.521905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.207 [2024-11-03 10:20:25.523935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.523971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.207 [2024-11-03 10:20:25.523982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:25:57.207 [2024-11-03 10:20:25.523990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.524028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.524041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.207 [2024-11-03 10:20:25.524050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:57.207 [2024-11-03 10:20:25.524058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.524083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:57.207 [2024-11-03 10:20:25.524108] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:57.207 [2024-11-03 10:20:25.524149] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:57.207 [2024-11-03 10:20:25.524164] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:57.207 [2024-11-03 10:20:25.524302] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.207 [2024-11-03 10:20:25.524314] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.207 [2024-11-03 10:20:25.524326] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:57.207 [2024-11-03 10:20:25.524337] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524354] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524362] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:57.207 [2024-11-03 10:20:25.524370] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.207 [2024-11-03 10:20:25.524378] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.207 [2024-11-03 10:20:25.524386] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.207 [2024-11-03 10:20:25.524394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.524401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.207 [2024-11-03 10:20:25.524409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:25:57.207 [2024-11-03 10:20:25.524419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.524502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.207 [2024-11-03 10:20:25.524520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.207 [2024-11-03 10:20:25.524530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:57.207 [2024-11-03 10:20:25.524541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.207 [2024-11-03 10:20:25.524640] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.207 [2024-11-03 10:20:25.524651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.207 [2024-11-03 10:20:25.524661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.207 [2024-11-03 10:20:25.524690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.207 [2024-11-03 10:20:25.524718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.207 [2024-11-03 10:20:25.524737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.207 [2024-11-03 10:20:25.524746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:25:57.207 [2024-11-03 10:20:25.524755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.207 [2024-11-03 10:20:25.524763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.207 [2024-11-03 10:20:25.524771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:57.207 [2024-11-03 10:20:25.524778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.207 [2024-11-03 10:20:25.524797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.207 [2024-11-03 10:20:25.524820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.207 [2024-11-03 10:20:25.524843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.207 [2024-11-03 10:20:25.524875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.207 [2024-11-03 10:20:25.524899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.207 [2024-11-03 10:20:25.524914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.207 [2024-11-03 10:20:25.524922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.207 [2024-11-03 10:20:25.524939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:57.207 [2024-11-03 10:20:25.524946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:57.207 [2024-11-03 10:20:25.524954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.207 [2024-11-03 10:20:25.524962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.207 [2024-11-03 10:20:25.524972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:57.207 [2024-11-03 10:20:25.524979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.207 [2024-11-03 10:20:25.524986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:57.207 [2024-11-03 10:20:25.524995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:57.207 [2024-11-03 10:20:25.525004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.207 [2024-11-03 10:20:25.525011] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.207 [2024-11-03 10:20:25.525019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.207 [2024-11-03 10:20:25.525027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.207 [2024-11-03 10:20:25.525039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.207 [2024-11-03 10:20:25.525047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:57.207 [2024-11-03 10:20:25.525056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.207 [2024-11-03 10:20:25.525065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.207 [2024-11-03 10:20:25.525072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.207 [2024-11-03 10:20:25.525078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.207 [2024-11-03 10:20:25.525085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.207 [2024-11-03 10:20:25.525093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.207 [2024-11-03 10:20:25.525102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.207 [2024-11-03 10:20:25.525112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:57.208 [2024-11-03 10:20:25.525119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:57.208 [2024-11-03 10:20:25.525126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:57.208 [2024-11-03 10:20:25.525136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:57.208 [2024-11-03 10:20:25.525143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:57.208 [2024-11-03 10:20:25.525150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:57.208 [2024-11-03 10:20:25.525158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:57.208 [2024-11-03 10:20:25.525165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:57.208 [2024-11-03 10:20:25.525173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:57.208 [2024-11-03 10:20:25.525186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:57.208 [2024-11-03 10:20:25.525193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:57.208 [2024-11-03 10:20:25.525200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:57.208 [2024-11-03 10:20:25.525208] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:57.208 [2024-11-03 10:20:25.525215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:57.208 [2024-11-03 10:20:25.525237] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:57.208 [2024-11-03 10:20:25.525248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.208 [2024-11-03 10:20:25.525256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.208 [2024-11-03 10:20:25.525264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.208 [2024-11-03 10:20:25.525273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.208 [2024-11-03 10:20:25.525283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.208 [2024-11-03 10:20:25.525291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.208 [2024-11-03 10:20:25.525299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.208 [2024-11-03 10:20:25.525308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:25:57.208 [2024-11-03 10:20:25.525316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.208 [2024-11-03 10:20:25.551105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.208 [2024-11-03 10:20:25.551180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.208 [2024-11-03 10:20:25.551202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.736 ms 00:25:57.208 [2024-11-03 10:20:25.551248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.208 [2024-11-03 10:20:25.551414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.208 [2024-11-03 10:20:25.551443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.208 [2024-11-03 10:20:25.551460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:25:57.208 [2024-11-03 10:20:25.551481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.208 [2024-11-03 10:20:25.564101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.208 [2024-11-03 10:20:25.564151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.208 [2024-11-03 10:20:25.564162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.517 ms 00:25:57.208 [2024-11-03 10:20:25.564171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.208 [2024-11-03 10:20:25.564223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.208 [2024-11-03 10:20:25.564250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.208 [2024-11-03 10:20:25.564264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.208 [2024-11-03 10:20:25.564275] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.208 [2024-11-03 10:20:25.564812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.208 [2024-11-03 10:20:25.564845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.208 [2024-11-03 10:20:25.564864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:25:57.208 [2024-11-03 10:20:25.564873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.208 [2024-11-03 10:20:25.565032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.208 [2024-11-03 10:20:25.565045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.208 [2024-11-03 10:20:25.565055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:25:57.208 [2024-11-03 10:20:25.565064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.572359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.572397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.470 [2024-11-03 10:20:25.572416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.272 ms 00:25:57.470 [2024-11-03 10:20:25.572424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.576559] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:57.470 [2024-11-03 10:20:25.576607] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.470 [2024-11-03 10:20:25.576620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.576629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.470 [2024-11-03 10:20:25.576639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:25:57.470 [2024-11-03 10:20:25.576647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.592647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.592690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.470 [2024-11-03 10:20:25.592706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.935 ms 00:25:57.470 [2024-11-03 10:20:25.592714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.596144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.596189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.470 [2024-11-03 10:20:25.596216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.375 ms 00:25:57.470 [2024-11-03 10:20:25.596240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.599194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.599252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:57.470 [2024-11-03 10:20:25.599263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.902 ms 00:25:57.470 [2024-11-03 10:20:25.599271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 
10:20:25.599659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.599685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.470 [2024-11-03 10:20:25.599696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:57.470 [2024-11-03 10:20:25.599704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.624673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.624726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.470 [2024-11-03 10:20:25.624745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.951 ms 00:25:57.470 [2024-11-03 10:20:25.624754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.632955] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.470 [2024-11-03 10:20:25.635884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.635928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.470 [2024-11-03 10:20:25.635946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.078 ms 00:25:57.470 [2024-11-03 10:20:25.635957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.636032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.636044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.470 [2024-11-03 10:20:25.636053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:57.470 [2024-11-03 10:20:25.636061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.636832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.636868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.470 [2024-11-03 10:20:25.636880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:25:57.470 [2024-11-03 10:20:25.636892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.636931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.636941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.470 [2024-11-03 10:20:25.636950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:57.470 [2024-11-03 10:20:25.636958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.636995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.470 [2024-11-03 10:20:25.637007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.637016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.470 [2024-11-03 10:20:25.637024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:57.470 [2024-11-03 10:20:25.637035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.642204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.642259] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.470 [2024-11-03 10:20:25.642270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.148 ms 00:25:57.470 [2024-11-03 10:20:25.642278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.642360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-03 10:20:25.642371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.470 [2024-11-03 10:20:25.642386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:57.470 [2024-11-03 10:20:25.642394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-03 10:20:25.643442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.626 ms, result 0 00:25:58.857  [2024-11-03T10:20:28.162Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-03T10:20:29.106Z] Copying: 34/1024 [MB] (15 MBps) [2024-11-03T10:20:30.058Z] Copying: 49/1024 [MB] (15 MBps) [2024-11-03T10:20:31.002Z] Copying: 61/1024 [MB] (11 MBps) [2024-11-03T10:20:31.944Z] Copying: 72/1024 [MB] (10 MBps) [2024-11-03T10:20:32.890Z] Copying: 84/1024 [MB] (12 MBps) [2024-11-03T10:20:33.834Z] Copying: 95/1024 [MB] (10 MBps) [2024-11-03T10:20:35.258Z] Copying: 107/1024 [MB] (12 MBps) [2024-11-03T10:20:35.832Z] Copying: 129/1024 [MB] (21 MBps) [2024-11-03T10:20:37.221Z] Copying: 141/1024 [MB] (12 MBps) [2024-11-03T10:20:38.166Z] Copying: 155/1024 [MB] (13 MBps) [2024-11-03T10:20:39.111Z] Copying: 168/1024 [MB] (13 MBps) [2024-11-03T10:20:40.056Z] Copying: 183/1024 [MB] (14 MBps) [2024-11-03T10:20:41.000Z] Copying: 197/1024 [MB] (14 MBps) [2024-11-03T10:20:41.944Z] Copying: 221/1024 [MB] (23 MBps) [2024-11-03T10:20:42.889Z] Copying: 233/1024 [MB] (12 MBps) [2024-11-03T10:20:43.834Z] Copying: 254/1024 [MB] (20 MBps) [2024-11-03T10:20:45.223Z] Copying: 269/1024 [MB] (14 MBps) [2024-11-03T10:20:46.167Z] Copying: 285/1024 [MB] (16 MBps) [2024-11-03T10:20:47.112Z] Copying: 307/1024 [MB] (22 MBps) [2024-11-03T10:20:48.057Z] Copying: 330/1024 [MB] (22 MBps) [2024-11-03T10:20:49.000Z] Copying: 352/1024 [MB] (22 MBps) [2024-11-03T10:20:49.944Z] Copying: 369/1024 [MB] (16 MBps) [2024-11-03T10:20:50.889Z] Copying: 380/1024 [MB] (10 MBps) [2024-11-03T10:20:51.833Z] Copying: 398/1024 [MB] (18 MBps) [2024-11-03T10:20:53.221Z] Copying: 417/1024 [MB] (18 MBps) [2024-11-03T10:20:54.165Z] Copying: 437/1024 [MB] (20 MBps) [2024-11-03T10:20:55.151Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-03T10:20:56.094Z] Copying: 459/1024 [MB] (11 MBps) [2024-11-03T10:20:57.038Z] Copying: 473/1024 [MB] (13 MBps) [2024-11-03T10:20:57.982Z] Copying: 483/1024 [MB] (10 MBps) [2024-11-03T10:20:58.926Z] Copying: 494/1024 [MB] (10 MBps) [2024-11-03T10:20:59.871Z] Copying: 515/1024 [MB] (21 MBps) [2024-11-03T10:21:01.257Z] Copying: 531/1024 [MB] (15 MBps) [2024-11-03T10:21:01.830Z] Copying: 554/1024 [MB] (23 MBps) [2024-11-03T10:21:03.216Z] Copying: 569/1024 [MB] (14 MBps) [2024-11-03T10:21:04.161Z] Copying: 584/1024 [MB] (14 MBps) [2024-11-03T10:21:05.104Z] Copying: 601/1024 [MB] (16 MBps) [2024-11-03T10:21:06.047Z] Copying: 621/1024 [MB] (20 MBps) [2024-11-03T10:21:06.991Z] Copying: 633/1024 [MB] (12 MBps) [2024-11-03T10:21:07.932Z] Copying: 646/1024 [MB] (12 MBps) [2024-11-03T10:21:08.871Z] Copying: 663/1024 [MB] (16 MBps) [2024-11-03T10:21:09.820Z] Copying: 674/1024 [MB] (11 MBps) [2024-11-03T10:21:11.198Z] 
Copying: 685/1024 [MB] (10 MBps) [2024-11-03T10:21:12.132Z] Copying: 698/1024 [MB] (12 MBps) [2024-11-03T10:21:13.075Z] Copying: 711/1024 [MB] (13 MBps) [2024-11-03T10:21:14.013Z] Copying: 724/1024 [MB] (12 MBps) [2024-11-03T10:21:14.962Z] Copying: 736/1024 [MB] (12 MBps) [2024-11-03T10:21:15.948Z] Copying: 749/1024 [MB] (13 MBps) [2024-11-03T10:21:16.884Z] Copying: 760/1024 [MB] (10 MBps) [2024-11-03T10:21:17.828Z] Copying: 773/1024 [MB] (13 MBps) [2024-11-03T10:21:19.210Z] Copying: 783/1024 [MB] (10 MBps) [2024-11-03T10:21:20.142Z] Copying: 796/1024 [MB] (12 MBps) [2024-11-03T10:21:21.075Z] Copying: 810/1024 [MB] (14 MBps) [2024-11-03T10:21:22.009Z] Copying: 823/1024 [MB] (12 MBps) [2024-11-03T10:21:22.943Z] Copying: 836/1024 [MB] (13 MBps) [2024-11-03T10:21:23.893Z] Copying: 849/1024 [MB] (13 MBps) [2024-11-03T10:21:24.837Z] Copying: 862/1024 [MB] (12 MBps) [2024-11-03T10:21:26.216Z] Copying: 873/1024 [MB] (10 MBps) [2024-11-03T10:21:27.152Z] Copying: 885/1024 [MB] (12 MBps) [2024-11-03T10:21:28.093Z] Copying: 897/1024 [MB] (12 MBps) [2024-11-03T10:21:29.035Z] Copying: 908/1024 [MB] (10 MBps) [2024-11-03T10:21:29.968Z] Copying: 918/1024 [MB] (10 MBps) [2024-11-03T10:21:30.902Z] Copying: 932/1024 [MB] (13 MBps) [2024-11-03T10:21:31.836Z] Copying: 945/1024 [MB] (13 MBps) [2024-11-03T10:21:33.215Z] Copying: 958/1024 [MB] (12 MBps) [2024-11-03T10:21:34.149Z] Copying: 968/1024 [MB] (10 MBps) [2024-11-03T10:21:35.081Z] Copying: 981/1024 [MB] (12 MBps) [2024-11-03T10:21:36.050Z] Copying: 994/1024 [MB] (13 MBps) [2024-11-03T10:21:36.993Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-03T10:21:37.937Z] Copying: 1016/1024 [MB] (10 MBps) [2024-11-03T10:21:37.937Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 10:21:37.644383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.644483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:09.575 [2024-11-03 10:21:37.644506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:09.575 [2024-11-03 10:21:37.644530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.644568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:09.575 [2024-11-03 10:21:37.645591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.645646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:09.575 [2024-11-03 10:21:37.645663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:27:09.575 [2024-11-03 10:21:37.645676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.645996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.646020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:09.575 [2024-11-03 10:21:37.646033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:27:09.575 [2024-11-03 10:21:37.646045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.650842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.650895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:09.575 [2024-11-03 10:21:37.650905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 
00:27:09.575 [2024-11-03 10:21:37.650914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.657179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.657220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:09.575 [2024-11-03 10:21:37.657242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:27:09.575 [2024-11-03 10:21:37.657252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.660132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.660186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:09.575 [2024-11-03 10:21:37.660257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:27:09.575 [2024-11-03 10:21:37.660267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.666016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.666081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:09.575 [2024-11-03 10:21:37.666093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.702 ms 00:27:09.575 [2024-11-03 10:21:37.666106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.671123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.671168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:09.575 [2024-11-03 10:21:37.671180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.967 ms 00:27:09.575 [2024-11-03 10:21:37.671188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.674977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.675025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:09.575 [2024-11-03 10:21:37.675035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:27:09.575 [2024-11-03 10:21:37.675043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.678042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.678086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:09.575 [2024-11-03 10:21:37.678096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:27:09.575 [2024-11-03 10:21:37.678104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.680610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.680657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:09.575 [2024-11-03 10:21:37.680666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:27:09.575 [2024-11-03 10:21:37.680673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.683221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.575 [2024-11-03 10:21:37.683278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:09.575 [2024-11-03 10:21:37.683288] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.474 ms 00:27:09.575 [2024-11-03 10:21:37.683295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.575 [2024-11-03 10:21:37.683336] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:09.575 [2024-11-03 10:21:37.683362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:09.575 [2024-11-03 10:21:37.683375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:09.575 [2024-11-03 10:21:37.683385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:09.575 [2024-11-03 10:21:37.683484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 
0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683958] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.683994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 
10:21:37.684167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:09.576 [2024-11-03 10:21:37.684245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:09.576 [2024-11-03 10:21:37.684255] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e441cb6-b29b-4f7b-bae7-171e68ff8085 00:27:09.576 [2024-11-03 10:21:37.684264] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:09.576 [2024-11-03 10:21:37.684275] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:09.576 [2024-11-03 10:21:37.684288] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:09.577 [2024-11-03 10:21:37.684304] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:09.577 [2024-11-03 10:21:37.684313] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:09.577 [2024-11-03 10:21:37.684322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:09.577 [2024-11-03 10:21:37.684330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:09.577 [2024-11-03 10:21:37.684337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:09.577 [2024-11-03 10:21:37.684344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:09.577 [2024-11-03 10:21:37.684352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.577 [2024-11-03 10:21:37.684360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:09.577 [2024-11-03 10:21:37.684383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:27:09.577 [2024-11-03 10:21:37.684391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.687490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.577 [2024-11-03 10:21:37.687525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:09.577 [2024-11-03 10:21:37.687536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:27:09.577 [2024-11-03 10:21:37.687544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.687704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.577 [2024-11-03 10:21:37.687715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:09.577 [2024-11-03 10:21:37.687729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:27:09.577 [2024-11-03 10:21:37.687737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.696879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.696926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.577 [2024-11-03 10:21:37.696938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.696946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.697018] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.697028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.577 [2024-11-03 10:21:37.697037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.697046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.697099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.697110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.577 [2024-11-03 10:21:37.697119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.697130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.697147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.697161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.577 [2024-11-03 10:21:37.697169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.697179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.716418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.716472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.577 [2024-11-03 10:21:37.716484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.716498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.731854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.731917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.577 [2024-11-03 10:21:37.731930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.731939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.732004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.732015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:09.577 [2024-11-03 10:21:37.732025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.732041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.732083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.732094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:09.577 [2024-11-03 10:21:37.732112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.732121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.732254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.732270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:09.577 [2024-11-03 10:21:37.732280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.732289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:27:09.577 [2024-11-03 10:21:37.732324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.732335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:09.577 [2024-11-03 10:21:37.732344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.732358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.732418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.732429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:09.577 [2024-11-03 10:21:37.732443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.732453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.732514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.577 [2024-11-03 10:21:37.732527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:09.577 [2024-11-03 10:21:37.732540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.577 [2024-11-03 10:21:37.732550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.577 [2024-11-03 10:21:37.732714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.294 ms, result 0 00:27:09.838 00:27:09.838 00:27:09.838 10:21:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:11.745 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:11.745 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:11.746 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:11.746 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:11.746 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89051 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89051 ']' 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89051 00:27:12.005 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89051) - No such process 00:27:12.005 Process with pid 89051 is not found 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89051 is not found' 00:27:12.005 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:12.266 Remove shared memory files 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:12.266 10:21:40 
ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:12.266 00:27:12.266 real 4m21.509s 00:27:12.266 user 4m50.533s 00:27:12.266 sys 0m28.115s 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:12.266 ************************************ 00:27:12.266 END TEST ftl_dirty_shutdown 00:27:12.266 10:21:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 ************************************ 00:27:12.528 10:21:40 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:12.528 10:21:40 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:12.528 10:21:40 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:12.528 10:21:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:12.528 ************************************ 00:27:12.528 START TEST ftl_upgrade_shutdown 00:27:12.528 ************************************ 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:12.528 * Looking for test storage... 00:27:12.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.528 --rc genhtml_branch_coverage=1 00:27:12.528 --rc genhtml_function_coverage=1 00:27:12.528 --rc genhtml_legend=1 00:27:12.528 --rc geninfo_all_blocks=1 00:27:12.528 --rc geninfo_unexecuted_blocks=1 00:27:12.528 00:27:12.528 ' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.528 --rc genhtml_branch_coverage=1 00:27:12.528 --rc genhtml_function_coverage=1 00:27:12.528 --rc genhtml_legend=1 00:27:12.528 --rc geninfo_all_blocks=1 00:27:12.528 --rc geninfo_unexecuted_blocks=1 00:27:12.528 00:27:12.528 ' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.528 --rc genhtml_branch_coverage=1 00:27:12.528 --rc genhtml_function_coverage=1 00:27:12.528 --rc genhtml_legend=1 00:27:12.528 --rc geninfo_all_blocks=1 00:27:12.528 --rc geninfo_unexecuted_blocks=1 00:27:12.528 00:27:12.528 ' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.528 --rc genhtml_branch_coverage=1 00:27:12.528 --rc genhtml_function_coverage=1 00:27:12.528 --rc genhtml_legend=1 00:27:12.528 --rc geninfo_all_blocks=1 00:27:12.528 --rc geninfo_unexecuted_blocks=1 00:27:12.528 00:27:12.528 ' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:12.528 10:21:40 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:12.528 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91869 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91869 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91869 ']' 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:12.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:12.529 10:21:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.789 [2024-11-03 10:21:40.931473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
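Note: the tcp_target_setup step above reduces, per the xtrace, to starting spdk_tgt pinned to core 0 and blocking until its RPC socket answers. A minimal sketch with the paths from this run; waitforlisten is the harness helper from autotest_common.sh:

# Target bring-up as logged above (the pid was 91869 in this run).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"   # polls until /var/tmp/spdk.sock accepts RPCs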
00:27:12.789 [2024-11-03 10:21:40.931590] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91869 ] 00:27:12.789 [2024-11-03 10:21:41.066947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.789 [2024-11-03 10:21:41.114441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:13.731 10:21:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:13.731 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:13.992 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:13.992 { 00:27:13.992 "name": "basen1", 00:27:13.992 "aliases": [ 00:27:13.992 "14bbed36-9183-42b1-9778-3a94132e0f86" 00:27:13.992 ], 00:27:13.992 "product_name": "NVMe disk", 00:27:13.992 "block_size": 4096, 00:27:13.992 "num_blocks": 1310720, 00:27:13.992 "uuid": "14bbed36-9183-42b1-9778-3a94132e0f86", 00:27:13.992 "numa_id": -1, 00:27:13.992 "assigned_rate_limits": { 00:27:13.992 "rw_ios_per_sec": 0, 00:27:13.992 "rw_mbytes_per_sec": 0, 00:27:13.992 "r_mbytes_per_sec": 0, 00:27:13.992 "w_mbytes_per_sec": 0 00:27:13.992 }, 00:27:13.992 "claimed": true, 00:27:13.992 "claim_type": "read_many_write_one", 00:27:13.992 "zoned": false, 00:27:13.992 "supported_io_types": { 00:27:13.992 "read": true, 00:27:13.992 "write": true, 00:27:13.992 "unmap": true, 00:27:13.992 "flush": true, 00:27:13.992 "reset": true, 00:27:13.992 "nvme_admin": true, 00:27:13.992 "nvme_io": true, 00:27:13.992 "nvme_io_md": false, 00:27:13.992 "write_zeroes": true, 00:27:13.992 "zcopy": false, 00:27:13.992 "get_zone_info": false, 00:27:13.992 "zone_management": false, 00:27:13.992 "zone_append": false, 00:27:13.992 "compare": true, 00:27:13.992 "compare_and_write": false, 00:27:13.992 "abort": true, 00:27:13.992 "seek_hole": false, 00:27:13.992 "seek_data": false, 00:27:13.992 "copy": true, 00:27:13.992 "nvme_iov_md": false 00:27:13.992 }, 00:27:13.992 "driver_specific": { 00:27:13.992 "nvme": [ 00:27:13.992 { 00:27:13.992 "pci_address": "0000:00:11.0", 00:27:13.992 "trid": { 00:27:13.992 "trtype": "PCIe", 00:27:13.992 "traddr": "0000:00:11.0" 00:27:13.992 }, 00:27:13.992 "ctrlr_data": { 00:27:13.992 "cntlid": 0, 00:27:13.992 "vendor_id": "0x1b36", 00:27:13.992 "model_number": "QEMU NVMe Ctrl", 00:27:13.992 "serial_number": "12341", 00:27:13.992 "firmware_revision": "8.0.0", 00:27:13.992 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:13.992 "oacs": { 00:27:13.992 "security": 0, 00:27:13.992 "format": 1, 00:27:13.992 "firmware": 0, 00:27:13.992 "ns_manage": 1 00:27:13.992 }, 00:27:13.992 "multi_ctrlr": false, 00:27:13.993 "ana_reporting": false 00:27:13.993 }, 00:27:13.993 "vs": { 00:27:13.993 "nvme_version": "1.4" 00:27:13.993 }, 00:27:13.993 "ns_data": { 00:27:13.993 "id": 1, 00:27:13.993 "can_share": false 00:27:13.993 } 00:27:13.993 } 00:27:13.993 ], 00:27:13.993 "mp_policy": "active_passive" 00:27:13.993 } 00:27:13.993 } 00:27:13.993 ]' 00:27:13.993 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:13.993 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:13.993 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:13.993 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:13.993 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:13.993 10:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:13.993 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:14.254 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:14.254 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:14.254 10:21:42 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:14.254 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:14.254 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=27eeed2e-eb8e-49be-ab50-29664a71ee68 00:27:14.254 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:14.254 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27eeed2e-eb8e-49be-ab50-29664a71ee68 00:27:14.514 10:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:14.775 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=448274d9-4c6f-44bf-a99c-c209ef7b7a50 00:27:14.775 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 448274d9-4c6f-44bf-a99c-c209ef7b7a50 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=72e96fc8-5da2-4043-9eea-a1c68b7fad15 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 72e96fc8-5da2-4043-9eea-a1c68b7fad15 ]] 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 72e96fc8-5da2-4043-9eea-a1c68b7fad15 5120 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=72e96fc8-5da2-4043-9eea-a1c68b7fad15 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 72e96fc8-5da2-4043-9eea-a1c68b7fad15 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=72e96fc8-5da2-4043-9eea-a1c68b7fad15 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:15.034 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72e96fc8-5da2-4043-9eea-a1c68b7fad15 00:27:15.292 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:15.293 { 00:27:15.293 "name": "72e96fc8-5da2-4043-9eea-a1c68b7fad15", 00:27:15.293 "aliases": [ 00:27:15.293 "lvs/basen1p0" 00:27:15.293 ], 00:27:15.293 "product_name": "Logical Volume", 00:27:15.293 "block_size": 4096, 00:27:15.293 "num_blocks": 5242880, 00:27:15.293 "uuid": "72e96fc8-5da2-4043-9eea-a1c68b7fad15", 00:27:15.293 "assigned_rate_limits": { 00:27:15.293 "rw_ios_per_sec": 0, 00:27:15.293 "rw_mbytes_per_sec": 0, 00:27:15.293 "r_mbytes_per_sec": 0, 00:27:15.293 "w_mbytes_per_sec": 0 00:27:15.293 }, 00:27:15.293 "claimed": false, 00:27:15.293 "zoned": false, 00:27:15.293 "supported_io_types": { 00:27:15.293 "read": true, 00:27:15.293 "write": true, 00:27:15.293 "unmap": true, 00:27:15.293 "flush": false, 00:27:15.293 "reset": true, 00:27:15.293 "nvme_admin": false, 00:27:15.293 "nvme_io": false, 00:27:15.293 "nvme_io_md": false, 00:27:15.293 "write_zeroes": 
true, 00:27:15.293 "zcopy": false, 00:27:15.293 "get_zone_info": false, 00:27:15.293 "zone_management": false, 00:27:15.293 "zone_append": false, 00:27:15.293 "compare": false, 00:27:15.293 "compare_and_write": false, 00:27:15.293 "abort": false, 00:27:15.293 "seek_hole": true, 00:27:15.293 "seek_data": true, 00:27:15.293 "copy": false, 00:27:15.293 "nvme_iov_md": false 00:27:15.293 }, 00:27:15.293 "driver_specific": { 00:27:15.293 "lvol": { 00:27:15.293 "lvol_store_uuid": "448274d9-4c6f-44bf-a99c-c209ef7b7a50", 00:27:15.293 "base_bdev": "basen1", 00:27:15.293 "thin_provision": true, 00:27:15.293 "num_allocated_clusters": 0, 00:27:15.293 "snapshot": false, 00:27:15.293 "clone": false, 00:27:15.293 "esnap_clone": false 00:27:15.293 } 00:27:15.293 } 00:27:15.293 } 00:27:15.293 ]' 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:15.293 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:15.563 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:15.563 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:15.563 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:15.828 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:15.828 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:15.828 10:21:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 72e96fc8-5da2-4043-9eea-a1c68b7fad15 -c cachen1p0 --l2p_dram_limit 2 00:27:15.828 [2024-11-03 10:21:44.157984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.158027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:15.828 [2024-11-03 10:21:44.158039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:15.828 [2024-11-03 10:21:44.158050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.158088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.158100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:15.828 [2024-11-03 10:21:44.158106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:15.828 [2024-11-03 10:21:44.158115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.158132] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:15.828 [2024-11-03 
10:21:44.158322] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:15.828 [2024-11-03 10:21:44.158335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.158343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:15.828 [2024-11-03 10:21:44.158351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:27:15.828 [2024-11-03 10:21:44.158359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.158380] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 47664418-c017-4bf2-9d98-9954191b2040 00:27:15.828 [2024-11-03 10:21:44.159636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.159664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:15.828 [2024-11-03 10:21:44.159674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:15.828 [2024-11-03 10:21:44.159680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.166561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.166584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:15.828 [2024-11-03 10:21:44.166593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.815 ms 00:27:15.828 [2024-11-03 10:21:44.166600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.166637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.166643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:15.828 [2024-11-03 10:21:44.166654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:15.828 [2024-11-03 10:21:44.166662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.166696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.166704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:15.828 [2024-11-03 10:21:44.166712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:15.828 [2024-11-03 10:21:44.166718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.166736] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:15.828 [2024-11-03 10:21:44.168353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.168374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:15.828 [2024-11-03 10:21:44.168384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.622 ms 00:27:15.828 [2024-11-03 10:21:44.168391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.168412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.168420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:15.828 [2024-11-03 10:21:44.168426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:15.828 [2024-11-03 10:21:44.168436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.168449] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:15.828 [2024-11-03 10:21:44.168566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:15.828 [2024-11-03 10:21:44.168577] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:15.828 [2024-11-03 10:21:44.168587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:15.828 [2024-11-03 10:21:44.168595] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:15.828 [2024-11-03 10:21:44.168603] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:15.828 [2024-11-03 10:21:44.168610] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:15.828 [2024-11-03 10:21:44.168622] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:15.828 [2024-11-03 10:21:44.168628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:15.828 [2024-11-03 10:21:44.168636] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:15.828 [2024-11-03 10:21:44.168643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.168650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:15.828 [2024-11-03 10:21:44.168657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:27:15.828 [2024-11-03 10:21:44.168664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.168730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.828 [2024-11-03 10:21:44.168740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:15.828 [2024-11-03 10:21:44.168747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:15.828 [2024-11-03 10:21:44.168757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.828 [2024-11-03 10:21:44.168830] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:15.828 [2024-11-03 10:21:44.168841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:15.828 [2024-11-03 10:21:44.168847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:15.828 [2024-11-03 10:21:44.168855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.828 [2024-11-03 10:21:44.168861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:15.828 [2024-11-03 10:21:44.168868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:15.828 [2024-11-03 10:21:44.168873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:15.828 [2024-11-03 10:21:44.168881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:15.828 [2024-11-03 10:21:44.168886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:15.828 [2024-11-03 10:21:44.168893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.828 [2024-11-03 10:21:44.168898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:15.828 [2024-11-03 10:21:44.168906] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:15.828 [2024-11-03 10:21:44.168911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.828 [2024-11-03 10:21:44.168921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:15.828 [2024-11-03 10:21:44.168926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:15.828 [2024-11-03 10:21:44.168933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.828 [2024-11-03 10:21:44.168939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:15.828 [2024-11-03 10:21:44.168945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:15.828 [2024-11-03 10:21:44.168950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.828 [2024-11-03 10:21:44.168958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:15.828 [2024-11-03 10:21:44.168963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:15.828 [2024-11-03 10:21:44.168971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.828 [2024-11-03 10:21:44.168977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:15.829 [2024-11-03 10:21:44.168984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:15.829 [2024-11-03 10:21:44.168989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.829 [2024-11-03 10:21:44.168996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:15.829 [2024-11-03 10:21:44.169000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:15.829 [2024-11-03 10:21:44.169007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.829 [2024-11-03 10:21:44.169013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:15.829 [2024-11-03 10:21:44.169022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:15.829 [2024-11-03 10:21:44.169027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.829 [2024-11-03 10:21:44.169034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:15.829 [2024-11-03 10:21:44.169039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:15.829 [2024-11-03 10:21:44.169045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.829 [2024-11-03 10:21:44.169054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:15.829 [2024-11-03 10:21:44.169060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:15.829 [2024-11-03 10:21:44.169065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.829 [2024-11-03 10:21:44.169072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:15.829 [2024-11-03 10:21:44.169078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:15.829 [2024-11-03 10:21:44.169085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.829 [2024-11-03 10:21:44.169090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:15.829 [2024-11-03 10:21:44.169096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:15.829 [2024-11-03 10:21:44.169101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.829 [2024-11-03 10:21:44.169109] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:15.829 [2024-11-03 10:21:44.169114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:15.829 [2024-11-03 10:21:44.169124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:15.829 [2024-11-03 10:21:44.169130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.829 [2024-11-03 10:21:44.169138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:15.829 [2024-11-03 10:21:44.169147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:15.829 [2024-11-03 10:21:44.169153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:15.829 [2024-11-03 10:21:44.169159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:15.829 [2024-11-03 10:21:44.169165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:15.829 [2024-11-03 10:21:44.169170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:15.829 [2024-11-03 10:21:44.169180] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:15.829 [2024-11-03 10:21:44.169188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:15.829 [2024-11-03 10:21:44.169202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:15.829 [2024-11-03 10:21:44.169222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:15.829 [2024-11-03 10:21:44.169240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:15.829 [2024-11-03 10:21:44.169250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:15.829 [2024-11-03 10:21:44.169255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:15.829 [2024-11-03 10:21:44.169301] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:15.829 [2024-11-03 10:21:44.169311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:15.829 [2024-11-03 10:21:44.169426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:15.829 [2024-11-03 10:21:44.169434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:15.829 [2024-11-03 10:21:44.169439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:15.829 [2024-11-03 10:21:44.169446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.829 [2024-11-03 10:21:44.169452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:15.829 [2024-11-03 10:21:44.169460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.666 ms 00:27:15.829 [2024-11-03 10:21:44.169466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.829 [2024-11-03 10:21:44.169497] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
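Note: everything from the two bdev_nvme_attach_controller calls down to the management trace above is the construction of a single bdev stack, and the sizes agree with the bdev_get_bdevs dumps (4096 B x 1310720 blocks = 5120 MiB for the base namespace). A sketch of the logged RPC sequence, with paths abbreviated to the repo root and UUIDs taken from this run:

# Base side: raw namespace -> lvstore -> thin-provisioned 20480 MiB lvol.
scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1
scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs                           # -> 448274d9-4c6f-44bf-a99c-c209ef7b7a50
scripts/rpc.py bdev_lvol_create basen1p0 20480 -t \
    -u 448274d9-4c6f-44bf-a99c-c209ef7b7a50                                  # -> 72e96fc8-5da2-4043-9eea-a1c68b7fad15

# Cache side: second namespace with its first 5120 MiB split off as the write buffer.
scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
scripts/rpc.py bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0

# FTL bdev on top; -t 60 widens the RPC timeout because the first start
# scrubs the NV cache data region (the 5-chunk scrub follows below).
scripts/rpc.py -t 60 bdev_ftl_create -b ftl \
    -d 72e96fc8-5da2-4043-9eea-a1c68b7fad15 -c cachen1p0 --l2p_dram_limit 2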
00:27:15.829 [2024-11-03 10:21:44.169504] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:20.032 [2024-11-03 10:21:48.144793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.144850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:20.032 [2024-11-03 10:21:48.144868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3975.279 ms 00:27:20.032 [2024-11-03 10:21:48.144875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.155051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.155087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:20.032 [2024-11-03 10:21:48.155099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.094 ms 00:27:20.032 [2024-11-03 10:21:48.155106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.155152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.155159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:20.032 [2024-11-03 10:21:48.155170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:20.032 [2024-11-03 10:21:48.155176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.164432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.164462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:20.032 [2024-11-03 10:21:48.164473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.222 ms 00:27:20.032 [2024-11-03 10:21:48.164480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.164503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.164510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:20.032 [2024-11-03 10:21:48.164521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:20.032 [2024-11-03 10:21:48.164527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.164921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.164936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:20.032 [2024-11-03 10:21:48.164946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:27:20.032 [2024-11-03 10:21:48.164953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.164989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.164997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:20.032 [2024-11-03 10:21:48.165005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:20.032 [2024-11-03 10:21:48.165014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.184937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.184984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:20.032 [2024-11-03 10:21:48.185002] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.897 ms 00:27:20.032 [2024-11-03 10:21:48.185013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.196042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:20.032 [2024-11-03 10:21:48.197183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.197212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:20.032 [2024-11-03 10:21:48.197220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.059 ms 00:27:20.032 [2024-11-03 10:21:48.197239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.213072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.213104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:20.032 [2024-11-03 10:21:48.213113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.813 ms 00:27:20.032 [2024-11-03 10:21:48.213123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.213202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.213212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:20.032 [2024-11-03 10:21:48.213219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:20.032 [2024-11-03 10:21:48.213237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.216287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.216318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:20.032 [2024-11-03 10:21:48.216326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.034 ms 00:27:20.032 [2024-11-03 10:21:48.216334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.219196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.219233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:20.032 [2024-11-03 10:21:48.219240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.830 ms 00:27:20.032 [2024-11-03 10:21:48.219248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.219476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.219490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:20.032 [2024-11-03 10:21:48.219497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:27:20.032 [2024-11-03 10:21:48.219507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.251911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.251943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:20.032 [2024-11-03 10:21:48.251952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.390 ms 00:27:20.032 [2024-11-03 10:21:48.251960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.256517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:20.032 [2024-11-03 10:21:48.256546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:20.032 [2024-11-03 10:21:48.256554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.520 ms 00:27:20.032 [2024-11-03 10:21:48.256563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.260002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.260030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:20.032 [2024-11-03 10:21:48.260037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.407 ms 00:27:20.032 [2024-11-03 10:21:48.260044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.264025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.264056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:20.032 [2024-11-03 10:21:48.264064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.952 ms 00:27:20.032 [2024-11-03 10:21:48.264073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.264108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.264118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:20.032 [2024-11-03 10:21:48.264126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:20.032 [2024-11-03 10:21:48.264134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.264188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.032 [2024-11-03 10:21:48.264207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:20.032 [2024-11-03 10:21:48.264214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:20.032 [2024-11-03 10:21:48.264222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.032 [2024-11-03 10:21:48.265033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4106.676 ms, result 0 00:27:20.032 { 00:27:20.032 "name": "ftl", 00:27:20.032 "uuid": "47664418-c017-4bf2-9d98-9954191b2040" 00:27:20.032 } 00:27:20.032 10:21:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:20.291 [2024-11-03 10:21:48.471181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.291 10:21:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:20.549 10:21:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:20.549 [2024-11-03 10:21:48.879488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:20.549 10:21:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:20.807 [2024-11-03 10:21:49.071783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:20.807 10:21:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:21.065 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:21.066 Fill FTL, iteration 1 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91992 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91992 /var/tmp/spdk.tgt.sock 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91992 ']' 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.066 10:21:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:21.324 [2024-11-03 10:21:49.473123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
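Note: with the FTL bdev now exported over NVMe/TCP (transport, subsystem cnode0, namespace, listener 127.0.0.1:4420, then save_config above), the loop that follows writes and checksums the device in 1 GiB passes: bs=1048576 x count=1024 = 1073741824 bytes per pass, two iterations at queue depth 2. The export sequence, with rpc.py calls verbatim from the xtrace and paths abbreviated to the repo root:

# NVMe/TCP export of the FTL bdev, as logged in ftl/common.sh@121-124.
scripts/rpc.py nvmf_create_transport --trtype TCP
scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
    -t TCP -f ipv4 -s 4420 -a 127.0.0.1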
00:27:21.324 [2024-11-03 10:21:49.473257] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91992 ] 00:27:21.324 [2024-11-03 10:21:49.609307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.324 [2024-11-03 10:21:49.642402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.257 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.257 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:22.257 10:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:22.257 ftln1 00:27:22.257 10:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:22.257 10:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91992 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91992 ']' 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91992 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91992 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:22.515 killing process with pid 91992 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91992' 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91992 00:27:22.515 10:21:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91992 00:27:22.773 10:21:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:22.773 10:21:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:23.031 [2024-11-03 10:21:51.138953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
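Note: tcp_dd, per the xtrace, wraps spdk_dd with the saved initiator config, so the FTL namespace is reachable as bdev ftln1 over the TCP loopback attach shown above. The two logged calls, paths abbreviated to the repo root:

# Initiator-side attach; the namespace surfaces as ftln1.
scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
    -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Fill pass 1: 1024 x 1 MiB of urandom starting at block offset 0.
build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0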
00:27:23.031 [2024-11-03 10:21:51.139066] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92023 ] 00:27:23.031 [2024-11-03 10:21:51.275794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.031 [2024-11-03 10:21:51.308710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.409  [2024-11-03T10:21:53.713Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-03T10:21:54.654Z] Copying: 414/1024 [MB] (220 MBps) [2024-11-03T10:21:55.639Z] Copying: 661/1024 [MB] (247 MBps) [2024-11-03T10:21:56.220Z] Copying: 921/1024 [MB] (260 MBps) [2024-11-03T10:21:56.220Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:27:27.858 00:27:27.858 Calculate MD5 checksum, iteration 1 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:27.858 10:21:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:27.858 [2024-11-03 10:21:56.131291] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
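Note: each checksum pass reads the freshly written gibibyte back through the same initiator path into a scratch file and hashes it host-side; the digest lands in the sums[] array set up earlier, presumably for comparison after the shutdown/upgrade half of the test. Sketch of the logged read-back, paths abbreviated to the repo root:

# Read pass 1 back from ftln1 and checksum it (first digest logged below:
# 895d5ed50f5cedd09ac23a7bfc55b97e).
build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=test/ftl/config/ini.json \
    --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
sums[i]=$(md5sum test/ftl/file | cut -f1 '-d ')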
00:27:27.858 [2024-11-03 10:21:56.131385] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92076 ] 00:27:28.118 [2024-11-03 10:21:56.261786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.118 [2024-11-03 10:21:56.295059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.499  [2024-11-03T10:21:58.121Z] Copying: 639/1024 [MB] (639 MBps) [2024-11-03T10:21:58.380Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:27:30.018 00:27:30.018 10:21:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:30.018 10:21:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:32.553 Fill FTL, iteration 2 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=895d5ed50f5cedd09ac23a7bfc55b97e 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:32.553 10:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:32.553 [2024-11-03 10:22:00.337044] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
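Note: seek and skip advance by count after every pass, so round 2 fills and verifies the second gibibyte (1 MiB blocks 1024..2047) instead of overwriting round 1. A reconstruction of the loop from the xtrace line numbers (upgrade_shutdown.sh@38-48), variable names as logged, not the verbatim script:

# Per-round offset bookkeeping (units: 1 MiB blocks).
seek=0; skip=0
for (( i = 0; i < iterations; i++ )); do
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    (( seek += count ))                              # 0 -> 1024 -> 2048, as logged
    tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
    sums[i]=$(md5sum $testdir/file | cut -f1 '-d ')
    (( skip += count ))                              # 0 -> 1024 -> 2048
done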
00:27:32.553 [2024-11-03 10:22:00.337248] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92126 ] 00:27:32.553 [2024-11-03 10:22:00.469251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.553 [2024-11-03 10:22:00.501545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.491  [2024-11-03T10:22:02.794Z] Copying: 191/1024 [MB] (191 MBps) [2024-11-03T10:22:03.733Z] Copying: 415/1024 [MB] (224 MBps) [2024-11-03T10:22:05.110Z] Copying: 664/1024 [MB] (249 MBps) [2024-11-03T10:22:05.110Z] Copying: 923/1024 [MB] (259 MBps) [2024-11-03T10:22:05.368Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:27:37.006 00:27:37.006 Calculate MD5 checksum, iteration 2 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:37.006 10:22:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:37.006 [2024-11-03 10:22:05.302180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:27:37.006 [2024-11-03 10:22:05.302297] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92173 ] 00:27:37.265 [2024-11-03 10:22:05.437779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.265 [2024-11-03 10:22:05.465820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.646  [2024-11-03T10:22:07.578Z] Copying: 674/1024 [MB] (674 MBps) [2024-11-03T10:22:07.838Z] Copying: 1024/1024 [MB] (average 666 MBps) 00:27:39.476 00:27:39.476 10:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:39.476 10:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:42.008 10:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:42.008 10:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6f5945ee997b84d29261bef086189817 00:27:42.008 10:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:42.008 10:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:42.008 10:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:42.008 [2024-11-03 10:22:10.000769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.008 [2024-11-03 10:22:10.000820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:42.008 [2024-11-03 10:22:10.000833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:42.008 [2024-11-03 10:22:10.000840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.008 [2024-11-03 10:22:10.000858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.008 [2024-11-03 10:22:10.000865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:42.008 [2024-11-03 10:22:10.000875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:42.008 [2024-11-03 10:22:10.000880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.008 [2024-11-03 10:22:10.000900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.008 [2024-11-03 10:22:10.000907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:42.008 [2024-11-03 10:22:10.000913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:42.008 [2024-11-03 10:22:10.000919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.008 [2024-11-03 10:22:10.000979] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.192 ms, result 0 00:27:42.008 true 00:27:42.008 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:42.008 { 00:27:42.008 "name": "ftl", 00:27:42.008 "properties": [ 00:27:42.008 { 00:27:42.008 "name": "superblock_version", 00:27:42.008 "value": 5, 00:27:42.008 "read-only": true 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "name": "base_device", 00:27:42.008 "bands": [ 00:27:42.008 { 00:27:42.008 "id": 0, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 
00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 1, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 2, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 3, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 4, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 5, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 6, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 7, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 8, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 9, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 10, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 11, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 12, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 13, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 14, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 15, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 16, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 17, 00:27:42.008 "state": "FREE", 00:27:42.008 "validity": 0.0 00:27:42.008 } 00:27:42.008 ], 00:27:42.008 "read-only": true 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "name": "cache_device", 00:27:42.008 "type": "bdev", 00:27:42.008 "chunks": [ 00:27:42.008 { 00:27:42.008 "id": 0, 00:27:42.008 "state": "INACTIVE", 00:27:42.008 "utilization": 0.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 1, 00:27:42.008 "state": "CLOSED", 00:27:42.008 "utilization": 1.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 2, 00:27:42.008 "state": "CLOSED", 00:27:42.008 "utilization": 1.0 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 3, 00:27:42.008 "state": "OPEN", 00:27:42.008 "utilization": 0.001953125 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "id": 4, 00:27:42.008 "state": "OPEN", 00:27:42.008 "utilization": 0.0 00:27:42.008 } 00:27:42.008 ], 00:27:42.008 "read-only": true 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "name": "verbose_mode", 00:27:42.008 "value": true, 00:27:42.008 "unit": "", 00:27:42.008 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:42.008 }, 00:27:42.008 { 00:27:42.008 "name": "prep_upgrade_on_shutdown", 00:27:42.008 "value": false, 00:27:42.008 "unit": "", 00:27:42.008 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:42.008 } 00:27:42.008 ] 00:27:42.008 } 00:27:42.008 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:42.008 [2024-11-03 10:22:10.365018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:42.008 [2024-11-03 10:22:10.365049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:42.008 [2024-11-03 10:22:10.365057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:42.008 [2024-11-03 10:22:10.365063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.008 [2024-11-03 10:22:10.365079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.008 [2024-11-03 10:22:10.365085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:42.008 [2024-11-03 10:22:10.365091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:42.008 [2024-11-03 10:22:10.365097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.008 [2024-11-03 10:22:10.365111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.008 [2024-11-03 10:22:10.365117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:42.008 [2024-11-03 10:22:10.365123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:42.008 [2024-11-03 10:22:10.365128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.008 [2024-11-03 10:22:10.365171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.140 ms, result 0 00:27:42.267 true 00:27:42.267 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:42.267 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:42.267 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:42.267 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:42.267 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:42.267 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:42.526 [2024-11-03 10:22:10.729345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.526 [2024-11-03 10:22:10.729373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:42.526 [2024-11-03 10:22:10.729380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:42.526 [2024-11-03 10:22:10.729386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.526 [2024-11-03 10:22:10.729401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.526 [2024-11-03 10:22:10.729407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:42.526 [2024-11-03 10:22:10.729413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:42.526 [2024-11-03 10:22:10.729418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.526 [2024-11-03 10:22:10.729433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.526 [2024-11-03 10:22:10.729439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:42.526 [2024-11-03 10:22:10.729444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:42.526 [2024-11-03 10:22:10.729449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:42.526 [2024-11-03 10:22:10.729487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.132 ms, result 0 00:27:42.526 true 00:27:42.526 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:42.785 { 00:27:42.785 "name": "ftl", 00:27:42.785 "properties": [ 00:27:42.785 { 00:27:42.785 "name": "superblock_version", 00:27:42.785 "value": 5, 00:27:42.785 "read-only": true 00:27:42.785 }, 00:27:42.785 { 00:27:42.785 "name": "base_device", 00:27:42.785 "bands": [ 00:27:42.785 { 00:27:42.785 "id": 0, 00:27:42.785 "state": "FREE", 00:27:42.785 "validity": 0.0 00:27:42.785 }, 00:27:42.785 { 00:27:42.785 "id": 1, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 2, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 3, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 4, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 5, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 6, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 7, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 8, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 9, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 10, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 11, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 12, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 13, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 14, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 15, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 16, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 17, 00:27:42.786 "state": "FREE", 00:27:42.786 "validity": 0.0 00:27:42.786 } 00:27:42.786 ], 00:27:42.786 "read-only": true 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "name": "cache_device", 00:27:42.786 "type": "bdev", 00:27:42.786 "chunks": [ 00:27:42.786 { 00:27:42.786 "id": 0, 00:27:42.786 "state": "INACTIVE", 00:27:42.786 "utilization": 0.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 1, 00:27:42.786 "state": "CLOSED", 00:27:42.786 "utilization": 1.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 2, 00:27:42.786 "state": "CLOSED", 00:27:42.786 "utilization": 1.0 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 3, 00:27:42.786 "state": "OPEN", 00:27:42.786 "utilization": 0.001953125 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "id": 4, 00:27:42.786 "state": "OPEN", 00:27:42.786 "utilization": 0.0 00:27:42.786 } 00:27:42.786 ], 00:27:42.786 "read-only": true 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "name": "verbose_mode", 
00:27:42.786 "value": true, 00:27:42.786 "unit": "", 00:27:42.786 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:42.786 }, 00:27:42.786 { 00:27:42.786 "name": "prep_upgrade_on_shutdown", 00:27:42.786 "value": true, 00:27:42.786 "unit": "", 00:27:42.786 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:42.786 } 00:27:42.786 ] 00:27:42.786 } 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91869 ]] 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91869 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91869 ']' 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91869 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91869 00:27:42.786 killing process with pid 91869 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91869' 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91869 00:27:42.786 10:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91869 00:27:42.786 [2024-11-03 10:22:11.040848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:42.786 [2024-11-03 10:22:11.047544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.786 [2024-11-03 10:22:11.047574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:42.786 [2024-11-03 10:22:11.047585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:42.786 [2024-11-03 10:22:11.047591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.786 [2024-11-03 10:22:11.047610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:42.786 [2024-11-03 10:22:11.048120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.786 [2024-11-03 10:22:11.048140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:42.786 [2024-11-03 10:22:11.048148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:27:42.786 [2024-11-03 10:22:11.048155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.661622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.661716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:52.788 [2024-11-03 10:22:19.661735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8613.409 ms 00:27:52.788 [2024-11-03 10:22:19.661751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.663430] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.663465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:52.788 [2024-11-03 10:22:19.663477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.660 ms 00:27:52.788 [2024-11-03 10:22:19.663486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.664699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.664725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:52.788 [2024-11-03 10:22:19.664736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.175 ms 00:27:52.788 [2024-11-03 10:22:19.664753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.668585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.668636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:52.788 [2024-11-03 10:22:19.668649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.781 ms 00:27:52.788 [2024-11-03 10:22:19.668659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.672596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.672646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:52.788 [2024-11-03 10:22:19.672660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.886 ms 00:27:52.788 [2024-11-03 10:22:19.672671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.672762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.672775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:52.788 [2024-11-03 10:22:19.672796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:52.788 [2024-11-03 10:22:19.672809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.675636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.675695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:52.788 [2024-11-03 10:22:19.675708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.805 ms 00:27:52.788 [2024-11-03 10:22:19.675717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.678529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.678576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:52.788 [2024-11-03 10:22:19.678588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.756 ms 00:27:52.788 [2024-11-03 10:22:19.678597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.681326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.681373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:52.788 [2024-11-03 10:22:19.681383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.675 ms 00:27:52.788 [2024-11-03 10:22:19.681391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.683813] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.683871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:52.788 [2024-11-03 10:22:19.683881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.335 ms 00:27:52.788 [2024-11-03 10:22:19.683890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.683936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:52.788 [2024-11-03 10:22:19.683953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:52.788 [2024-11-03 10:22:19.683965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:52.788 [2024-11-03 10:22:19.683974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:52.788 [2024-11-03 10:22:19.683983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.683993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:52.788 [2024-11-03 10:22:19.684113] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:52.788 [2024-11-03 10:22:19.684122] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 47664418-c017-4bf2-9d98-9954191b2040 00:27:52.788 [2024-11-03 10:22:19.684131] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:52.788 [2024-11-03 10:22:19.684139] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:52.788 [2024-11-03 10:22:19.684146] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:52.788 [2024-11-03 10:22:19.684155] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:52.788 [2024-11-03 10:22:19.684167] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:52.788 [2024-11-03 10:22:19.684184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:52.788 [2024-11-03 10:22:19.684192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:52.788 [2024-11-03 10:22:19.684213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:52.788 [2024-11-03 10:22:19.684220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:52.788 [2024-11-03 10:22:19.684249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.684261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:52.788 [2024-11-03 10:22:19.684271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:27:52.788 [2024-11-03 10:22:19.684281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.687419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.687457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:52.788 [2024-11-03 10:22:19.687469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.118 ms 00:27:52.788 [2024-11-03 10:22:19.687485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.788 [2024-11-03 10:22:19.687637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.788 [2024-11-03 10:22:19.687648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:52.788 [2024-11-03 10:22:19.687657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.125 ms 00:27:52.788 [2024-11-03 10:22:19.687665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.698785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.698840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:52.789 [2024-11-03 10:22:19.698857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.698866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.698908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.698918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:52.789 [2024-11-03 10:22:19.698928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.698944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.699029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.699041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:52.789 [2024-11-03 10:22:19.699051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.699064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.699083] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.699093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:52.789 [2024-11-03 10:22:19.699102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.699112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.718057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.718106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:52.789 [2024-11-03 10:22:19.718119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.718136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.732848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.732900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:52.789 [2024-11-03 10:22:19.732912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.732922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.733047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.733060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:52.789 [2024-11-03 10:22:19.733070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.733079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.733138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.733149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:52.789 [2024-11-03 10:22:19.733158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.733167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.733272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.733284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:52.789 [2024-11-03 10:22:19.733293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.733302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.733338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.733353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:52.789 [2024-11-03 10:22:19.733362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.733377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.733433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.733443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:52.789 [2024-11-03 10:22:19.733453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.733463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 
[2024-11-03 10:22:19.733529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:52.789 [2024-11-03 10:22:19.733541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:52.789 [2024-11-03 10:22:19.733551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:52.789 [2024-11-03 10:22:19.733563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.789 [2024-11-03 10:22:19.733732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8686.098 ms, result 0 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92360 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92360 00:27:55.368 10:22:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92360 ']' 00:27:55.369 10:22:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:55.369 10:22:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.369 10:22:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.369 10:22:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.369 10:22:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.369 10:22:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.369 [2024-11-03 10:22:23.375275] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:27:55.369 [2024-11-03 10:22:23.375405] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92360 ] 00:27:55.369 [2024-11-03 10:22:23.507547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.369 [2024-11-03 10:22:23.548301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.627 [2024-11-03 10:22:23.842584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:55.627 [2024-11-03 10:22:23.842640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:55.886 [2024-11-03 10:22:23.988570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:23.988605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:55.886 [2024-11-03 10:22:23.988618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:55.886 [2024-11-03 10:22:23.988628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.988674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:23.988681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:55.886 [2024-11-03 10:22:23.988688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:55.886 [2024-11-03 10:22:23.988697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.988713] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:55.886 [2024-11-03 10:22:23.988911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:55.886 [2024-11-03 10:22:23.988924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:23.988930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:55.886 [2024-11-03 10:22:23.988939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:27:55.886 [2024-11-03 10:22:23.988945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.990175] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:55.886 [2024-11-03 10:22:23.993015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:23.993048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:55.886 [2024-11-03 10:22:23.993056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.842 ms 00:27:55.886 [2024-11-03 10:22:23.993065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.993114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:23.993121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:55.886 [2024-11-03 10:22:23.993128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:55.886 [2024-11-03 10:22:23.993133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.999210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 
10:22:23.999242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:55.886 [2024-11-03 10:22:23.999250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.028 ms 00:27:55.886 [2024-11-03 10:22:23.999256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.999290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:23.999297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:55.886 [2024-11-03 10:22:23.999303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:55.886 [2024-11-03 10:22:23.999309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.999343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:23.999353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:55.886 [2024-11-03 10:22:23.999361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:55.886 [2024-11-03 10:22:23.999366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:23.999382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:55.886 [2024-11-03 10:22:24.000906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:24.000929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:55.886 [2024-11-03 10:22:24.000936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:27:55.886 [2024-11-03 10:22:24.000942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:24.000965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:24.000972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:55.886 [2024-11-03 10:22:24.000981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:55.886 [2024-11-03 10:22:24.000990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:24.001006] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:55.886 [2024-11-03 10:22:24.001023] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:55.886 [2024-11-03 10:22:24.001054] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:55.886 [2024-11-03 10:22:24.001066] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:55.886 [2024-11-03 10:22:24.001149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:55.886 [2024-11-03 10:22:24.001160] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:55.886 [2024-11-03 10:22:24.001169] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:55.886 [2024-11-03 10:22:24.001179] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:55.886 [2024-11-03 10:22:24.001186] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:55.886 [2024-11-03 10:22:24.001192] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:55.886 [2024-11-03 10:22:24.001197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:55.886 [2024-11-03 10:22:24.001203] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:55.886 [2024-11-03 10:22:24.001208] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:55.886 [2024-11-03 10:22:24.001214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:24.001221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:55.886 [2024-11-03 10:22:24.001239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:27:55.886 [2024-11-03 10:22:24.001249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:24.001315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.886 [2024-11-03 10:22:24.001322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:55.886 [2024-11-03 10:22:24.001328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:55.886 [2024-11-03 10:22:24.001334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.886 [2024-11-03 10:22:24.001411] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:55.886 [2024-11-03 10:22:24.001422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:55.886 [2024-11-03 10:22:24.001429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:55.886 [2024-11-03 10:22:24.001434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.886 [2024-11-03 10:22:24.001443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:55.886 [2024-11-03 10:22:24.001450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:55.886 [2024-11-03 10:22:24.001455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:55.886 [2024-11-03 10:22:24.001461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:55.886 [2024-11-03 10:22:24.001467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:55.886 [2024-11-03 10:22:24.001472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.886 [2024-11-03 10:22:24.001477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:55.886 [2024-11-03 10:22:24.001483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:55.886 [2024-11-03 10:22:24.001490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.886 [2024-11-03 10:22:24.001496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:55.886 [2024-11-03 10:22:24.001502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:55.886 [2024-11-03 10:22:24.001506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.886 [2024-11-03 10:22:24.001512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:55.886 [2024-11-03 10:22:24.001520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:55.886 [2024-11-03 10:22:24.001525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.886 [2024-11-03 10:22:24.001531] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:55.886 [2024-11-03 10:22:24.001536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:55.886 [2024-11-03 10:22:24.001542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:55.886 [2024-11-03 10:22:24.001547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:55.886 [2024-11-03 10:22:24.001552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:55.886 [2024-11-03 10:22:24.001557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:55.886 [2024-11-03 10:22:24.001563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:55.886 [2024-11-03 10:22:24.001568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:55.886 [2024-11-03 10:22:24.001573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:55.887 [2024-11-03 10:22:24.001579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:55.887 [2024-11-03 10:22:24.001584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:55.887 [2024-11-03 10:22:24.001589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:55.887 [2024-11-03 10:22:24.001594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:55.887 [2024-11-03 10:22:24.001600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:55.887 [2024-11-03 10:22:24.001607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.887 [2024-11-03 10:22:24.001612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:55.887 [2024-11-03 10:22:24.001618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:55.887 [2024-11-03 10:22:24.001624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.887 [2024-11-03 10:22:24.001629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:55.887 [2024-11-03 10:22:24.001634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:55.887 [2024-11-03 10:22:24.001639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.887 [2024-11-03 10:22:24.001645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:55.887 [2024-11-03 10:22:24.001650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:55.887 [2024-11-03 10:22:24.001655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.887 [2024-11-03 10:22:24.001660] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:55.887 [2024-11-03 10:22:24.001667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:55.887 [2024-11-03 10:22:24.001679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:55.887 [2024-11-03 10:22:24.001685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:55.887 [2024-11-03 10:22:24.001691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:55.887 [2024-11-03 10:22:24.001696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:55.887 [2024-11-03 10:22:24.001703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:55.887 [2024-11-03 10:22:24.001709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:55.887 [2024-11-03 10:22:24.001714] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:55.887 [2024-11-03 10:22:24.001719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:55.887 [2024-11-03 10:22:24.001725] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:55.887 [2024-11-03 10:22:24.001735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:55.887 [2024-11-03 10:22:24.001747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:55.887 [2024-11-03 10:22:24.001764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:55.887 [2024-11-03 10:22:24.001769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:55.887 [2024-11-03 10:22:24.001774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:55.887 [2024-11-03 10:22:24.001779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:55.887 [2024-11-03 10:22:24.001822] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:55.887 [2024-11-03 10:22:24.001828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:55.887 [2024-11-03 10:22:24.001841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:55.887 [2024-11-03 10:22:24.001847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:55.887 [2024-11-03 10:22:24.001853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:55.887 [2024-11-03 10:22:24.001864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.887 [2024-11-03 10:22:24.001870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:55.887 [2024-11-03 10:22:24.001877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.506 ms 00:27:55.887 [2024-11-03 10:22:24.001884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.887 [2024-11-03 10:22:24.001925] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:55.887 [2024-11-03 10:22:24.001938] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:59.174 [2024-11-03 10:22:27.446708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.446761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:59.174 [2024-11-03 10:22:27.446774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3444.769 ms 00:27:59.174 [2024-11-03 10:22:27.446781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.456898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.456933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:59.174 [2024-11-03 10:22:27.456942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.031 ms 00:27:59.174 [2024-11-03 10:22:27.456949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.456988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.456995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:59.174 [2024-11-03 10:22:27.457002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:59.174 [2024-11-03 10:22:27.457007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.473741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.473787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:59.174 [2024-11-03 10:22:27.473800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.696 ms 00:27:59.174 [2024-11-03 10:22:27.473810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.473847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.473857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:59.174 [2024-11-03 10:22:27.473866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:59.174 [2024-11-03 10:22:27.473875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.474352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.474380] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:59.174 [2024-11-03 10:22:27.474393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:27:59.174 [2024-11-03 10:22:27.474403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.474450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.474460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:59.174 [2024-11-03 10:22:27.474471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:59.174 [2024-11-03 10:22:27.474482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.481628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.481661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:59.174 [2024-11-03 10:22:27.481672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.119 ms 00:27:59.174 [2024-11-03 10:22:27.481681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.484885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:59.174 [2024-11-03 10:22:27.484920] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:59.174 [2024-11-03 10:22:27.484931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.484947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:59.174 [2024-11-03 10:22:27.484956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.136 ms 00:27:59.174 [2024-11-03 10:22:27.484963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.489017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.489047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:59.174 [2024-11-03 10:22:27.489063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.015 ms 00:27:59.174 [2024-11-03 10:22:27.489075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.490780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.490809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:59.174 [2024-11-03 10:22:27.490819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.665 ms 00:27:59.174 [2024-11-03 10:22:27.490826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.492547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.492577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:59.174 [2024-11-03 10:22:27.492585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.686 ms 00:27:59.174 [2024-11-03 10:22:27.492593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.492910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.492932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:59.174 [2024-11-03 
10:22:27.492941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:27:59.174 [2024-11-03 10:22:27.492950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.510403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.510440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:59.174 [2024-11-03 10:22:27.510449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.436 ms 00:27:59.174 [2024-11-03 10:22:27.510456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.516302] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:59.174 [2024-11-03 10:22:27.517015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.517038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:59.174 [2024-11-03 10:22:27.517053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.524 ms 00:27:59.174 [2024-11-03 10:22:27.517060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.517114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.517122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:59.174 [2024-11-03 10:22:27.517129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:59.174 [2024-11-03 10:22:27.517135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.517174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.517183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:59.174 [2024-11-03 10:22:27.517190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:59.174 [2024-11-03 10:22:27.517196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.517216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.517223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:59.174 [2024-11-03 10:22:27.517240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:59.174 [2024-11-03 10:22:27.517246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.517275] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:59.174 [2024-11-03 10:22:27.517283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.517290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:59.174 [2024-11-03 10:22:27.517297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:59.174 [2024-11-03 10:22:27.517302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.520653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.520680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:59.174 [2024-11-03 10:22:27.520687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.334 ms 00:27:59.174 [2024-11-03 10:22:27.520694] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.520754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.174 [2024-11-03 10:22:27.520762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:59.174 [2024-11-03 10:22:27.520768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:59.174 [2024-11-03 10:22:27.520775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.174 [2024-11-03 10:22:27.521678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3532.724 ms, result 0 00:27:59.433 [2024-11-03 10:22:27.537418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.433 [2024-11-03 10:22:27.553414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:59.433 [2024-11-03 10:22:27.561517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.433 10:22:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.433 10:22:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:59.433 10:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:59.433 10:22:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:59.433 10:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:59.433 [2024-11-03 10:22:27.793533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.433 [2024-11-03 10:22:27.793561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:59.433 [2024-11-03 10:22:27.793574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:59.433 [2024-11-03 10:22:27.793580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.433 [2024-11-03 10:22:27.793598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.433 [2024-11-03 10:22:27.793605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:59.433 [2024-11-03 10:22:27.793611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:59.433 [2024-11-03 10:22:27.793617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.433 [2024-11-03 10:22:27.793634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.433 [2024-11-03 10:22:27.793640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:59.433 [2024-11-03 10:22:27.793647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:59.433 [2024-11-03 10:22:27.793652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.433 [2024-11-03 10:22:27.793695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.152 ms, result 0 00:27:59.691 true 00:27:59.691 10:22:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:59.691 { 00:27:59.691 "name": "ftl", 00:27:59.691 "properties": [ 00:27:59.691 { 00:27:59.691 "name": "superblock_version", 00:27:59.691 "value": 5, 00:27:59.691 "read-only": true 00:27:59.691 }, 00:27:59.691 { 
00:27:59.691 "name": "base_device", 00:27:59.691 "bands": [ 00:27:59.691 { 00:27:59.691 "id": 0, 00:27:59.691 "state": "CLOSED", 00:27:59.691 "validity": 1.0 00:27:59.691 }, 00:27:59.691 { 00:27:59.691 "id": 1, 00:27:59.691 "state": "CLOSED", 00:27:59.691 "validity": 1.0 00:27:59.691 }, 00:27:59.691 { 00:27:59.691 "id": 2, 00:27:59.691 "state": "CLOSED", 00:27:59.691 "validity": 0.007843137254901933 00:27:59.691 }, 00:27:59.691 { 00:27:59.692 "id": 3, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 4, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 5, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 6, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 7, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 8, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 9, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 10, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 11, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 12, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 13, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 14, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 15, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 16, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 17, 00:27:59.692 "state": "FREE", 00:27:59.692 "validity": 0.0 00:27:59.692 } 00:27:59.692 ], 00:27:59.692 "read-only": true 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "name": "cache_device", 00:27:59.692 "type": "bdev", 00:27:59.692 "chunks": [ 00:27:59.692 { 00:27:59.692 "id": 0, 00:27:59.692 "state": "INACTIVE", 00:27:59.692 "utilization": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 1, 00:27:59.692 "state": "OPEN", 00:27:59.692 "utilization": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 2, 00:27:59.692 "state": "OPEN", 00:27:59.692 "utilization": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 3, 00:27:59.692 "state": "FREE", 00:27:59.692 "utilization": 0.0 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "id": 4, 00:27:59.692 "state": "FREE", 00:27:59.692 "utilization": 0.0 00:27:59.692 } 00:27:59.692 ], 00:27:59.692 "read-only": true 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "name": "verbose_mode", 00:27:59.692 "value": true, 00:27:59.692 "unit": "", 00:27:59.692 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:59.692 }, 00:27:59.692 { 00:27:59.692 "name": "prep_upgrade_on_shutdown", 00:27:59.692 "value": false, 00:27:59.692 "unit": "", 00:27:59.692 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:59.692 } 00:27:59.692 ] 00:27:59.692 } 00:27:59.692 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:59.692 10:22:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:59.692 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:59.950 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:59.950 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:59.950 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:59.950 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:59.950 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:00.209 Validate MD5 checksum, iteration 1 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:00.209 10:22:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.209 [2024-11-03 10:22:28.464633] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
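For orientation, the tcp_dd and md5sum lines around this point can be condensed into a short bash sketch of the validate-checksum pattern this test exercises. The spdk_dd flags and paths below are copied verbatim from the trace; the loop wrapper and the iteration arithmetic are assumptions, since the body of the test_validate_checksum helper is not reproduced in this log:

    # Sketch only: two 1024 MiB windows of the ftln1 bdev are read over NVMe/TCP
    # and hashed; the same sums must reappear when the test repeats this after
    # the dirty shutdown further down in this log.
    skip=0
    for i in 1 2; do
        echo "Validate MD5 checksum, iteration $i"
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
            --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '
        skip=$((skip + 1024))
    done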
00:28:00.209 [2024-11-03 10:22:28.464745] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92429 ] 00:28:00.467 [2024-11-03 10:22:28.600461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.467 [2024-11-03 10:22:28.632546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.847  [2024-11-03T10:22:31.148Z] Copying: 530/1024 [MB] (530 MBps) [2024-11-03T10:22:31.726Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:28:03.364 00:28:03.626 10:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:03.626 10:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:05.525 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:05.783 Validate MD5 checksum, iteration 2 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=895d5ed50f5cedd09ac23a7bfc55b97e 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 895d5ed50f5cedd09ac23a7bfc55b97e != \8\9\5\d\5\e\d\5\0\f\5\c\e\d\d\0\9\a\c\2\3\a\7\b\f\c\5\5\b\9\7\e ]] 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.783 10:22:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.783 [2024-11-03 10:22:33.952450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
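The backslash-escaped string on the right-hand side of the != above is an xtrace artifact, not a garbled sum: inside [[ ]] bash treats an unquoted right-hand side of != as a glob pattern, and when the script quotes that side to force a literal comparison, xtrace renders the quoting by escaping every character. A plain-script equivalent of the check, with assumed variable names since the script body is not shown here, would be:

    # Quoting "$expected" makes [[ ]] compare literal strings, which xtrace
    # prints as \8\9\5\d... in the trace above.
    expected=895d5ed50f5cedd09ac23a7bfc55b97e   # sum captured in iteration 1
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    if [[ $sum != "$expected" ]]; then
        echo "MD5 mismatch: got $sum, want $expected" >&2
        exit 1
    fi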
00:28:05.783 [2024-11-03 10:22:33.952684] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92490 ] 00:28:05.783 [2024-11-03 10:22:34.088482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.783 [2024-11-03 10:22:34.120651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.164  [2024-11-03T10:22:36.463Z] Copying: 502/1024 [MB] (502 MBps) [2024-11-03T10:22:37.032Z] Copying: 1024/1024 [MB] (average 540 MBps) 00:28:08.670 00:28:08.670 10:22:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:08.670 10:22:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6f5945ee997b84d29261bef086189817 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6f5945ee997b84d29261bef086189817 != \6\f\5\9\4\5\e\e\9\9\7\b\8\4\d\2\9\2\6\1\b\e\f\0\8\6\1\8\9\8\1\7 ]] 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92360 ]] 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92360 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:10.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92546 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:10.571 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92546 00:28:10.572 10:22:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92546 ']' 00:28:10.572 10:22:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.572 10:22:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.572 10:22:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
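The kill -9 of pid 92360 and the target re-setup traced around this point are the dirty-shutdown step itself: SIGKILL denies FTL any chance to flush state, so the on-disk FTL state stays marked dirty, and the fresh spdk_tgt started just below from the saved tgt.json has to come up through the band and open-chunk recovery path whose notices follow. Condensed into a sketch (commands as traced in this log; waitforlisten is the autotest_common.sh helper invoked here):

    # Dirty shutdown: no clean FTL teardown happens after SIGKILL.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # Restart from the configuration saved before the first shutdown; FTL detects
    # the dirty state and runs 'Recover band state' / 'Recover open chunk'.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs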
00:28:10.572 10:22:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.572 10:22:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:10.572 10:22:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:10.572 [2024-11-03 10:22:38.914910] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:10.572 [2024-11-03 10:22:38.915030] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92546 ] 00:28:10.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92360 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:10.831 [2024-11-03 10:22:39.047496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.831 [2024-11-03 10:22:39.088492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.090 [2024-11-03 10:22:39.383790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:11.090 [2024-11-03 10:22:39.383846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:11.349 [2024-11-03 10:22:39.529878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.349 [2024-11-03 10:22:39.529914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:11.349 [2024-11-03 10:22:39.529928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:11.349 [2024-11-03 10:22:39.529935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.349 [2024-11-03 10:22:39.529978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.349 [2024-11-03 10:22:39.529986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:11.350 [2024-11-03 10:22:39.529993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:11.350 [2024-11-03 10:22:39.529998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.530016] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:11.350 [2024-11-03 10:22:39.530203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:11.350 [2024-11-03 10:22:39.530215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.530222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:11.350 [2024-11-03 10:22:39.530241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:28:11.350 [2024-11-03 10:22:39.530248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.530421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:11.350 [2024-11-03 10:22:39.534855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.534885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:11.350 [2024-11-03 10:22:39.534894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.435 ms 
00:28:11.350 [2024-11-03 10:22:39.534904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.535857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.535883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:11.350 [2024-11-03 10:22:39.535891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:11.350 [2024-11-03 10:22:39.535898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.536113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.536122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:11.350 [2024-11-03 10:22:39.536131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:28:11.350 [2024-11-03 10:22:39.536137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.536167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.536174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:11.350 [2024-11-03 10:22:39.536180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:11.350 [2024-11-03 10:22:39.536185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.536215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.536222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:11.350 [2024-11-03 10:22:39.536241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:11.350 [2024-11-03 10:22:39.536249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.536268] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:11.350 [2024-11-03 10:22:39.537064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.537084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:11.350 [2024-11-03 10:22:39.537091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.800 ms 00:28:11.350 [2024-11-03 10:22:39.537097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.537120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.537127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:11.350 [2024-11-03 10:22:39.537134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:11.350 [2024-11-03 10:22:39.537143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.537158] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:11.350 [2024-11-03 10:22:39.537175] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:11.350 [2024-11-03 10:22:39.537202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:11.350 [2024-11-03 10:22:39.537214] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:11.350 [2024-11-03 
10:22:39.537311] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:11.350 [2024-11-03 10:22:39.537322] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:11.350 [2024-11-03 10:22:39.537332] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:11.350 [2024-11-03 10:22:39.537340] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537350] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537360] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:11.350 [2024-11-03 10:22:39.537366] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:11.350 [2024-11-03 10:22:39.537371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:11.350 [2024-11-03 10:22:39.537378] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:11.350 [2024-11-03 10:22:39.537383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.537389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:11.350 [2024-11-03 10:22:39.537395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.227 ms 00:28:11.350 [2024-11-03 10:22:39.537401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.537467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.350 [2024-11-03 10:22:39.537473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:11.350 [2024-11-03 10:22:39.537479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:11.350 [2024-11-03 10:22:39.537487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.350 [2024-11-03 10:22:39.537562] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:11.350 [2024-11-03 10:22:39.537570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:11.350 [2024-11-03 10:22:39.537582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:11.350 [2024-11-03 10:22:39.537600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:11.350 [2024-11-03 10:22:39.537612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:11.350 [2024-11-03 10:22:39.537617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:11.350 [2024-11-03 10:22:39.537622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:11.350 [2024-11-03 10:22:39.537633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:11.350 [2024-11-03 10:22:39.537638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 
10:22:39.537644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:11.350 [2024-11-03 10:22:39.537649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:11.350 [2024-11-03 10:22:39.537657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:11.350 [2024-11-03 10:22:39.537667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:11.350 [2024-11-03 10:22:39.537671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:11.350 [2024-11-03 10:22:39.537682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:11.350 [2024-11-03 10:22:39.537687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:11.350 [2024-11-03 10:22:39.537697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:11.350 [2024-11-03 10:22:39.537702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:11.350 [2024-11-03 10:22:39.537712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:11.350 [2024-11-03 10:22:39.537718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:11.350 [2024-11-03 10:22:39.537729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:11.350 [2024-11-03 10:22:39.537736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:11.350 [2024-11-03 10:22:39.537749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:11.350 [2024-11-03 10:22:39.537755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:11.350 [2024-11-03 10:22:39.537767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:11.350 [2024-11-03 10:22:39.537773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:11.350 [2024-11-03 10:22:39.537785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:11.350 [2024-11-03 10:22:39.537802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:11.350 [2024-11-03 10:22:39.537807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.350 [2024-11-03 10:22:39.537813] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:11.350 [2024-11-03 10:22:39.537820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:11.350 
[2024-11-03 10:22:39.537828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:11.351 [2024-11-03 10:22:39.537834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.351 [2024-11-03 10:22:39.537847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:11.351 [2024-11-03 10:22:39.537853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:11.351 [2024-11-03 10:22:39.537859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:11.351 [2024-11-03 10:22:39.537865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:11.351 [2024-11-03 10:22:39.537870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:11.351 [2024-11-03 10:22:39.537876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:11.351 [2024-11-03 10:22:39.537883] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:11.351 [2024-11-03 10:22:39.537890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:11.351 [2024-11-03 10:22:39.537903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:11.351 [2024-11-03 10:22:39.537922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:11.351 [2024-11-03 10:22:39.537928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:11.351 [2024-11-03 10:22:39.537934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:11.351 [2024-11-03 10:22:39.537941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.537981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:11.351 [2024-11-03 10:22:39.537995] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:11.351 [2024-11-03 10:22:39.538002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.538008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:11.351 [2024-11-03 10:22:39.538015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:11.351 [2024-11-03 10:22:39.538025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:11.351 [2024-11-03 10:22:39.538032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:11.351 [2024-11-03 10:22:39.538039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.538045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:11.351 [2024-11-03 10:22:39.538052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:28:11.351 [2024-11-03 10:22:39.538061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.546570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.546597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:11.351 [2024-11-03 10:22:39.546609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.457 ms 00:28:11.351 [2024-11-03 10:22:39.546615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.546643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.546653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:11.351 [2024-11-03 10:22:39.546660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:11.351 [2024-11-03 10:22:39.546667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.563698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.563731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:11.351 [2024-11-03 10:22:39.563741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.989 ms 00:28:11.351 [2024-11-03 10:22:39.563748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.563778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.563789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:11.351 [2024-11-03 10:22:39.563796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:11.351 [2024-11-03 10:22:39.563805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.563887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.563899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:11.351 [2024-11-03 10:22:39.563906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:11.351 [2024-11-03 10:22:39.563917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.563949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.563956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:11.351 [2024-11-03 10:22:39.563962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:11.351 [2024-11-03 10:22:39.563967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.570840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.570875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:11.351 [2024-11-03 10:22:39.570886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.855 ms 00:28:11.351 [2024-11-03 10:22:39.570896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.570993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.571006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:11.351 [2024-11-03 10:22:39.571017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:11.351 [2024-11-03 10:22:39.571027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.576277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.576317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:11.351 [2024-11-03 10:22:39.576329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.227 ms 00:28:11.351 [2024-11-03 10:22:39.576339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.577841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.577884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:11.351 [2024-11-03 10:22:39.577896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:28:11.351 [2024-11-03 10:22:39.577905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.595632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.595668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:11.351 [2024-11-03 10:22:39.595685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.698 ms 00:28:11.351 [2024-11-03 10:22:39.595695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.595811] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:11.351 [2024-11-03 10:22:39.595902] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:11.351 [2024-11-03 10:22:39.595988] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:11.351 [2024-11-03 10:22:39.596070] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:11.351 [2024-11-03 10:22:39.596078] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.596085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:11.351 [2024-11-03 10:22:39.596092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:28:11.351 [2024-11-03 10:22:39.596099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.596129] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:11.351 [2024-11-03 10:22:39.596142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.596148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:11.351 [2024-11-03 10:22:39.596155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:11.351 [2024-11-03 10:22:39.596161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.599443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.599472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:11.351 [2024-11-03 10:22:39.599480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.264 ms 00:28:11.351 [2024-11-03 10:22:39.599487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.600012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.600029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:11.351 [2024-11-03 10:22:39.600037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:11.351 [2024-11-03 10:22:39.600043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.351 [2024-11-03 10:22:39.600097] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:11.351 [2024-11-03 10:22:39.600304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.351 [2024-11-03 10:22:39.600316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:11.351 [2024-11-03 10:22:39.600324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:28:11.351 [2024-11-03 10:22:39.600330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.921 [2024-11-03 10:22:40.215700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.921 [2024-11-03 10:22:40.215816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:11.921 [2024-11-03 10:22:40.215839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 615.113 ms 00:28:11.921 [2024-11-03 10:22:40.215850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.921 [2024-11-03 10:22:40.218133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.921 [2024-11-03 10:22:40.218187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:11.921 [2024-11-03 10:22:40.218201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:28:11.921 [2024-11-03 10:22:40.218211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.921 [2024-11-03 10:22:40.219155] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:11.921 [2024-11-03 10:22:40.219204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.921 [2024-11-03 10:22:40.219216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:11.921 [2024-11-03 10:22:40.219249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.930 ms 00:28:11.921 [2024-11-03 10:22:40.219260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.921 [2024-11-03 10:22:40.219309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.921 [2024-11-03 10:22:40.219322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:11.921 [2024-11-03 10:22:40.219331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:11.921 [2024-11-03 10:22:40.219340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.921 [2024-11-03 10:22:40.219387] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 619.281 ms, result 0 00:28:11.921 [2024-11-03 10:22:40.219449] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:11.921 [2024-11-03 10:22:40.219724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.921 [2024-11-03 10:22:40.219737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:11.921 [2024-11-03 10:22:40.219747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:28:11.921 [2024-11-03 10:22:40.219755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.881427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.881510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:12.862 [2024-11-03 10:22:40.881527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 660.972 ms 00:28:12.862 [2024-11-03 10:22:40.881536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.883254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.883294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:12.862 [2024-11-03 10:22:40.883305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.215 ms 00:28:12.862 [2024-11-03 10:22:40.883313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.883855] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:12.862 [2024-11-03 10:22:40.883888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.883897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:12.862 [2024-11-03 10:22:40.883908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.546 ms 00:28:12.862 [2024-11-03 10:22:40.883916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.883949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.883959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:12.862 [2024-11-03 10:22:40.883968] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:12.862 [2024-11-03 10:22:40.883976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.884013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 664.574 ms, result 0 00:28:12.862 [2024-11-03 10:22:40.884063] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:12.862 [2024-11-03 10:22:40.884073] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:12.862 [2024-11-03 10:22:40.884085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.884095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:12.862 [2024-11-03 10:22:40.884104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1283.999 ms 00:28:12.862 [2024-11-03 10:22:40.884112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.884144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.884154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:12.862 [2024-11-03 10:22:40.884167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:12.862 [2024-11-03 10:22:40.884175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.893533] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:12.862 [2024-11-03 10:22:40.893646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.893658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:12.862 [2024-11-03 10:22:40.893669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.455 ms 00:28:12.862 [2024-11-03 10:22:40.893677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.894387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.894410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:12.862 [2024-11-03 10:22:40.894420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:28:12.862 [2024-11-03 10:22:40.894428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.896662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.896689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:12.862 [2024-11-03 10:22:40.896699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.216 ms 00:28:12.862 [2024-11-03 10:22:40.896707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.896753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.896763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:12.862 [2024-11-03 10:22:40.896772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:12.862 [2024-11-03 10:22:40.896781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.896898] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.896919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:12.862 [2024-11-03 10:22:40.896928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:12.862 [2024-11-03 10:22:40.896936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.896962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.896971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:12.862 [2024-11-03 10:22:40.896984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:12.862 [2024-11-03 10:22:40.896992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.897030] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:12.862 [2024-11-03 10:22:40.897041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.897055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:12.862 [2024-11-03 10:22:40.897064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:12.862 [2024-11-03 10:22:40.897072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.897127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.862 [2024-11-03 10:22:40.897140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:12.862 [2024-11-03 10:22:40.897149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:12.862 [2024-11-03 10:22:40.897157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.862 [2024-11-03 10:22:40.898489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1368.042 ms, result 0 00:28:12.862 [2024-11-03 10:22:40.914135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.862 [2024-11-03 10:22:40.930083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:12.862 [2024-11-03 10:22:40.938206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:13.121 Validate MD5 checksum, iteration 1 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:13.121 10:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:13.379 [2024-11-03 10:22:41.510077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:13.379 [2024-11-03 10:22:41.510187] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92575 ] 00:28:13.379 [2024-11-03 10:22:41.643732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.379 [2024-11-03 10:22:41.676021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.757  [2024-11-03T10:22:44.092Z] Copying: 543/1024 [MB] (543 MBps) [2024-11-03T10:22:44.661Z] Copying: 1024/1024 [MB] (average 567 MBps) 00:28:16.299 00:28:16.299 10:22:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:16.299 10:22:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:18.828 Validate MD5 checksum, iteration 2 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=895d5ed50f5cedd09ac23a7bfc55b97e 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 895d5ed50f5cedd09ac23a7bfc55b97e != \8\9\5\d\5\e\d\5\0\f\5\c\e\d\d\0\9\a\c\2\3\a\7\b\f\c\5\5\b\9\7\e ]] 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:18.828 10:22:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:18.828 [2024-11-03 10:22:46.642645] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:18.828 [2024-11-03 10:22:46.642754] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92635 ] 00:28:18.828 [2024-11-03 10:22:46.777385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.828 [2024-11-03 10:22:46.809516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.206  [2024-11-03T10:22:48.827Z] Copying: 631/1024 [MB] (631 MBps) [2024-11-03T10:22:53.031Z] Copying: 1024/1024 [MB] (average 624 MBps) 00:28:24.669 00:28:24.669 10:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:24.669 10:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6f5945ee997b84d29261bef086189817 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6f5945ee997b84d29261bef086189817 != \6\f\5\9\4\5\e\e\9\9\7\b\8\4\d\2\9\2\6\1\b\e\f\0\8\6\1\8\9\8\1\7 ]] 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:26.043 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92546 ]] 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92546 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92546 ']' 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92546 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92546 00:28:26.302 killing process with pid 92546 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92546' 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92546 00:28:26.302 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92546 00:28:26.302 [2024-11-03 10:22:54.645987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:26.302 [2024-11-03 10:22:54.649605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.649635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:26.302 [2024-11-03 10:22:54.649646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:26.302 [2024-11-03 10:22:54.649653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.649671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:26.302 [2024-11-03 10:22:54.650182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.650203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:26.302 [2024-11-03 10:22:54.650212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.501 ms 00:28:26.302 [2024-11-03 10:22:54.650219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.650440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.650449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:26.302 [2024-11-03 10:22:54.650457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:28:26.302 [2024-11-03 10:22:54.650464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.651968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.651991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:26.302 [2024-11-03 10:22:54.651999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 00:28:26.302 [2024-11-03 10:22:54.652005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.652876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.652898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:26.302 [2024-11-03 10:22:54.652906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:28:26.302 [2024-11-03 10:22:54.652912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.655301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.655324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:26.302 [2024-11-03 10:22:54.655331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.357 ms 00:28:26.302 [2024-11-03 10:22:54.655337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.657033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.657071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:26.302 [2024-11-03 10:22:54.657080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.665 ms 00:28:26.302 [2024-11-03 10:22:54.657087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.657154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.657162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:26.302 [2024-11-03 10:22:54.657169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:26.302 [2024-11-03 10:22:54.657175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.659406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.659428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:26.302 [2024-11-03 10:22:54.659435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.217 ms 00:28:26.302 [2024-11-03 10:22:54.659441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.302 [2024-11-03 10:22:54.661308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.302 [2024-11-03 10:22:54.661330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:26.302 [2024-11-03 10:22:54.661337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.842 ms 00:28:26.302 [2024-11-03 10:22:54.661342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.562 [2024-11-03 10:22:54.663206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.562 [2024-11-03 10:22:54.663237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:26.562 [2024-11-03 10:22:54.663244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.831 ms 00:28:26.562 [2024-11-03 10:22:54.663250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.562 [2024-11-03 10:22:54.664853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.562 [2024-11-03 10:22:54.664875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:26.562 [2024-11-03 10:22:54.664882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.555 ms 00:28:26.563 [2024-11-03 10:22:54.664888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.664912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:26.563 [2024-11-03 10:22:54.664924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:26.563 [2024-11-03 10:22:54.664935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:26.563 [2024-11-03 10:22:54.664942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:26.563 [2024-11-03 10:22:54.664949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664967] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.664997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.665003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.665009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.665015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.665020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.665027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.665032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:26.563 [2024-11-03 10:22:54.665040] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:26.563 [2024-11-03 10:22:54.665047] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 47664418-c017-4bf2-9d98-9954191b2040 00:28:26.563 [2024-11-03 10:22:54.665054] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:26.563 [2024-11-03 10:22:54.665059] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:26.563 [2024-11-03 10:22:54.665066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:26.563 [2024-11-03 10:22:54.665072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:26.563 [2024-11-03 10:22:54.665077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:26.563 [2024-11-03 10:22:54.665084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:26.563 [2024-11-03 10:22:54.665090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:26.563 [2024-11-03 10:22:54.665096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:26.563 [2024-11-03 10:22:54.665102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:26.563 [2024-11-03 10:22:54.665109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.563 [2024-11-03 10:22:54.665115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:26.563 [2024-11-03 10:22:54.665121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:28:26.563 [2024-11-03 10:22:54.665129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.666781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.563 [2024-11-03 10:22:54.666801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:26.563 [2024-11-03 10:22:54.666809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.638 ms 00:28:26.563 [2024-11-03 10:22:54.666816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.666904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:26.563 [2024-11-03 10:22:54.666912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:26.563 [2024-11-03 10:22:54.666920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:28:26.563 [2024-11-03 10:22:54.666927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.672870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.672891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:26.563 [2024-11-03 10:22:54.672899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.672906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.672928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.672935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:26.563 [2024-11-03 10:22:54.672948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.672955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.672998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.673006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:26.563 [2024-11-03 10:22:54.673013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.673019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.673033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.673040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:26.563 [2024-11-03 10:22:54.673047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.673055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.683558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.683584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:26.563 [2024-11-03 10:22:54.683593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.683600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.691949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.691976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:26.563 [2024-11-03 10:22:54.691989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.691995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.692058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.692066] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:26.563 [2024-11-03 10:22:54.692072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.692078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.692107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.692115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:26.563 [2024-11-03 10:22:54.692125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.692135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.692195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.692211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:26.563 [2024-11-03 10:22:54.692218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.692235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.692265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.692272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:26.563 [2024-11-03 10:22:54.692279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.692286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.692326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.692334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:26.563 [2024-11-03 10:22:54.692340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.692346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.692389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:26.563 [2024-11-03 10:22:54.692398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:26.563 [2024-11-03 10:22:54.692405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:26.563 [2024-11-03 10:22:54.692410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:26.563 [2024-11-03 10:22:54.692526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 42.895 ms, result 0 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:26.563 Remove shared memory files 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92360 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:26.563 10:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:26.563 ************************************ 00:28:26.563 END TEST ftl_upgrade_shutdown 00:28:26.563 ************************************ 00:28:26.563 00:28:26.563 real 1m14.228s 00:28:26.564 user 1m38.360s 00:28:26.564 sys 0m19.920s 00:28:26.564 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.564 10:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.824 10:22:54 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:26.824 10:22:54 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:26.824 10:22:54 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:26.824 10:22:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.824 10:22:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:26.824 ************************************ 00:28:26.824 START TEST ftl_restore_fast 00:28:26.824 ************************************ 00:28:26.824 10:22:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:26.824 * Looking for test storage... 00:28:26.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:26.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.824 --rc genhtml_branch_coverage=1 00:28:26.824 --rc genhtml_function_coverage=1 00:28:26.824 --rc genhtml_legend=1 00:28:26.824 --rc geninfo_all_blocks=1 00:28:26.824 --rc geninfo_unexecuted_blocks=1 00:28:26.824 00:28:26.824 ' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:26.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.824 --rc genhtml_branch_coverage=1 00:28:26.824 --rc genhtml_function_coverage=1 00:28:26.824 --rc genhtml_legend=1 00:28:26.824 --rc geninfo_all_blocks=1 00:28:26.824 --rc geninfo_unexecuted_blocks=1 00:28:26.824 00:28:26.824 ' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:26.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.824 --rc genhtml_branch_coverage=1 00:28:26.824 --rc genhtml_function_coverage=1 00:28:26.824 --rc genhtml_legend=1 00:28:26.824 --rc geninfo_all_blocks=1 00:28:26.824 --rc geninfo_unexecuted_blocks=1 00:28:26.824 00:28:26.824 ' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:26.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.824 --rc genhtml_branch_coverage=1 00:28:26.824 --rc genhtml_function_coverage=1 00:28:26.824 --rc genhtml_legend=1 00:28:26.824 --rc geninfo_all_blocks=1 00:28:26.824 --rc geninfo_unexecuted_blocks=1 00:28:26.824 00:28:26.824 ' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
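For reference, the version check traced above (the cmp_versions calls from scripts/common.sh in the xtrace, driven by 'lt 1.15 2') compares versions component-wise after splitting on ".-:". A minimal bash sketch of that pattern follows; it is reconstructed from the trace, not the verbatim SPDK helper:

    # Sketch of the component-wise version compare seen in the xtrace above.
    # Splits both versions on ".-:", pads the shorter with zeros, and decides
    # the requested operator at the first differing component.
    cmp_versions() {
        local op=$2 a b v ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # every component equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov is older than 2"   # matches 'lt 1.15 2' above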
00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:26.824 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vT9FsrjTi9 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:26.825 10:22:55 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92803 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92803 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92803 ']' 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.825 10:22:55 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:27.086 [2024-11-03 10:22:55.234456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:27.086 [2024-11-03 10:22:55.234623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92803 ] 00:28:27.086 [2024-11-03 10:22:55.373354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.086 [2024-11-03 10:22:55.446444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:28.031 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:28.293 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:28.293 { 00:28:28.293 "name": "nvme0n1", 00:28:28.293 "aliases": [ 00:28:28.293 "b08ed7dc-42a5-4879-9a93-552539f7bf55" 00:28:28.293 ], 00:28:28.293 "product_name": "NVMe disk", 00:28:28.293 "block_size": 4096, 00:28:28.293 "num_blocks": 1310720, 00:28:28.293 "uuid": "b08ed7dc-42a5-4879-9a93-552539f7bf55", 00:28:28.293 "numa_id": -1, 00:28:28.293 "assigned_rate_limits": { 00:28:28.293 "rw_ios_per_sec": 0, 00:28:28.293 "rw_mbytes_per_sec": 0, 00:28:28.293 "r_mbytes_per_sec": 0, 00:28:28.293 "w_mbytes_per_sec": 0 00:28:28.293 }, 00:28:28.293 "claimed": true, 00:28:28.293 "claim_type": "read_many_write_one", 00:28:28.293 "zoned": false, 00:28:28.293 "supported_io_types": { 00:28:28.293 "read": true, 00:28:28.293 "write": true, 00:28:28.293 "unmap": true, 00:28:28.293 "flush": true, 00:28:28.293 "reset": true, 00:28:28.293 "nvme_admin": true, 00:28:28.293 "nvme_io": true, 00:28:28.293 "nvme_io_md": false, 00:28:28.293 "write_zeroes": true, 00:28:28.293 "zcopy": false, 00:28:28.293 "get_zone_info": false, 00:28:28.293 "zone_management": false, 00:28:28.293 "zone_append": false, 00:28:28.293 "compare": true, 00:28:28.293 "compare_and_write": false, 00:28:28.293 "abort": true, 00:28:28.293 "seek_hole": false, 00:28:28.293 "seek_data": false, 00:28:28.293 "copy": true, 00:28:28.293 "nvme_iov_md": false 00:28:28.293 }, 00:28:28.293 "driver_specific": { 00:28:28.293 "nvme": [ 00:28:28.293 { 00:28:28.293 "pci_address": "0000:00:11.0", 00:28:28.293 "trid": { 00:28:28.293 "trtype": "PCIe", 00:28:28.293 "traddr": "0000:00:11.0" 00:28:28.293 }, 00:28:28.293 "ctrlr_data": { 00:28:28.293 "cntlid": 0, 00:28:28.293 "vendor_id": "0x1b36", 00:28:28.293 "model_number": "QEMU NVMe Ctrl", 00:28:28.293 "serial_number": "12341", 00:28:28.293 "firmware_revision": "8.0.0", 00:28:28.293 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:28.293 "oacs": { 00:28:28.293 "security": 0, 00:28:28.293 "format": 1, 00:28:28.293 "firmware": 0, 00:28:28.293 "ns_manage": 1 00:28:28.293 }, 00:28:28.293 "multi_ctrlr": false, 00:28:28.293 "ana_reporting": false 00:28:28.293 }, 00:28:28.293 "vs": { 00:28:28.293 "nvme_version": "1.4" 00:28:28.293 }, 00:28:28.293 "ns_data": { 00:28:28.293 "id": 1, 00:28:28.293 "can_share": false 00:28:28.293 } 00:28:28.293 } 00:28:28.293 ], 00:28:28.293 "mp_policy": "active_passive" 00:28:28.293 } 00:28:28.293 } 00:28:28.293 ]' 00:28:28.293 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:28.293 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:28.293 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=448274d9-4c6f-44bf-a99c-c209ef7b7a50 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:28.552 10:22:56 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 448274d9-4c6f-44bf-a99c-c209ef7b7a50 00:28:28.810 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:29.068 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=aaac26bd-af4f-487b-ab58-e84255148848 00:28:29.068 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aaac26bd-af4f-487b-ab58-e84255148848 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:29.326 { 00:28:29.326 "name": "244787ab-bac3-4f86-9d74-0e2fd9e50f01", 00:28:29.326 "aliases": [ 00:28:29.326 "lvs/nvme0n1p0" 00:28:29.326 ], 00:28:29.326 "product_name": "Logical Volume", 00:28:29.326 "block_size": 4096, 00:28:29.326 "num_blocks": 26476544, 00:28:29.326 "uuid": "244787ab-bac3-4f86-9d74-0e2fd9e50f01", 00:28:29.326 "assigned_rate_limits": { 00:28:29.326 "rw_ios_per_sec": 0, 00:28:29.326 "rw_mbytes_per_sec": 0, 00:28:29.326 "r_mbytes_per_sec": 0, 00:28:29.326 "w_mbytes_per_sec": 0 00:28:29.326 }, 00:28:29.326 "claimed": false, 00:28:29.326 "zoned": false, 00:28:29.326 "supported_io_types": { 00:28:29.326 "read": true, 00:28:29.326 "write": true, 00:28:29.326 "unmap": true, 00:28:29.326 "flush": false, 00:28:29.326 "reset": true, 00:28:29.326 "nvme_admin": false, 00:28:29.326 "nvme_io": false, 00:28:29.326 "nvme_io_md": false, 00:28:29.326 "write_zeroes": true, 00:28:29.326 "zcopy": false, 00:28:29.326 "get_zone_info": false, 00:28:29.326 "zone_management": false, 00:28:29.326 
"zone_append": false, 00:28:29.326 "compare": false, 00:28:29.326 "compare_and_write": false, 00:28:29.326 "abort": false, 00:28:29.326 "seek_hole": true, 00:28:29.326 "seek_data": true, 00:28:29.326 "copy": false, 00:28:29.326 "nvme_iov_md": false 00:28:29.326 }, 00:28:29.326 "driver_specific": { 00:28:29.326 "lvol": { 00:28:29.326 "lvol_store_uuid": "aaac26bd-af4f-487b-ab58-e84255148848", 00:28:29.326 "base_bdev": "nvme0n1", 00:28:29.326 "thin_provision": true, 00:28:29.326 "num_allocated_clusters": 0, 00:28:29.326 "snapshot": false, 00:28:29.326 "clone": false, 00:28:29.326 "esnap_clone": false 00:28:29.326 } 00:28:29.326 } 00:28:29.326 } 00:28:29.326 ]' 00:28:29.326 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:29.584 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:29.585 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:29.585 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:29.585 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:29.585 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:29.585 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:29.585 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:29.585 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:29.843 10:22:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:29.843 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:29.843 { 00:28:29.843 "name": "244787ab-bac3-4f86-9d74-0e2fd9e50f01", 00:28:29.843 "aliases": [ 00:28:29.843 "lvs/nvme0n1p0" 00:28:29.843 ], 00:28:29.843 "product_name": "Logical Volume", 00:28:29.843 "block_size": 4096, 00:28:29.843 "num_blocks": 26476544, 00:28:29.843 "uuid": "244787ab-bac3-4f86-9d74-0e2fd9e50f01", 00:28:29.843 "assigned_rate_limits": { 00:28:29.843 "rw_ios_per_sec": 0, 00:28:29.843 "rw_mbytes_per_sec": 0, 00:28:29.843 "r_mbytes_per_sec": 0, 00:28:29.843 "w_mbytes_per_sec": 0 00:28:29.843 }, 00:28:29.843 "claimed": false, 00:28:29.843 "zoned": false, 00:28:29.843 "supported_io_types": { 00:28:29.843 "read": true, 00:28:29.843 "write": true, 00:28:29.843 "unmap": true, 00:28:29.843 "flush": false, 00:28:29.843 "reset": true, 00:28:29.843 "nvme_admin": false, 00:28:29.843 "nvme_io": false, 00:28:29.843 "nvme_io_md": false, 00:28:29.843 "write_zeroes": true, 00:28:29.843 "zcopy": false, 00:28:29.843 "get_zone_info": false, 00:28:29.843 
"zone_management": false, 00:28:29.843 "zone_append": false, 00:28:29.843 "compare": false, 00:28:29.843 "compare_and_write": false, 00:28:29.843 "abort": false, 00:28:29.843 "seek_hole": true, 00:28:29.843 "seek_data": true, 00:28:29.843 "copy": false, 00:28:29.843 "nvme_iov_md": false 00:28:29.843 }, 00:28:29.843 "driver_specific": { 00:28:29.843 "lvol": { 00:28:29.843 "lvol_store_uuid": "aaac26bd-af4f-487b-ab58-e84255148848", 00:28:29.843 "base_bdev": "nvme0n1", 00:28:29.843 "thin_provision": true, 00:28:29.843 "num_allocated_clusters": 0, 00:28:29.843 "snapshot": false, 00:28:29.843 "clone": false, 00:28:29.843 "esnap_clone": false 00:28:29.843 } 00:28:29.843 } 00:28:29.843 } 00:28:29.843 ]' 00:28:29.843 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:30.101 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 244787ab-bac3-4f86-9d74-0e2fd9e50f01 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:30.360 { 00:28:30.360 "name": "244787ab-bac3-4f86-9d74-0e2fd9e50f01", 00:28:30.360 "aliases": [ 00:28:30.360 "lvs/nvme0n1p0" 00:28:30.360 ], 00:28:30.360 "product_name": "Logical Volume", 00:28:30.360 "block_size": 4096, 00:28:30.360 "num_blocks": 26476544, 00:28:30.360 "uuid": "244787ab-bac3-4f86-9d74-0e2fd9e50f01", 00:28:30.360 "assigned_rate_limits": { 00:28:30.360 "rw_ios_per_sec": 0, 00:28:30.360 "rw_mbytes_per_sec": 0, 00:28:30.360 "r_mbytes_per_sec": 0, 00:28:30.360 "w_mbytes_per_sec": 0 00:28:30.360 }, 00:28:30.360 "claimed": false, 00:28:30.360 "zoned": false, 00:28:30.360 "supported_io_types": { 00:28:30.360 "read": true, 00:28:30.360 "write": true, 00:28:30.360 "unmap": true, 00:28:30.360 "flush": false, 00:28:30.360 "reset": true, 00:28:30.360 "nvme_admin": false, 00:28:30.360 "nvme_io": false, 00:28:30.360 "nvme_io_md": false, 00:28:30.360 "write_zeroes": true, 00:28:30.360 "zcopy": false, 00:28:30.360 "get_zone_info": false, 00:28:30.360 "zone_management": false, 00:28:30.360 "zone_append": false, 00:28:30.360 "compare": false, 00:28:30.360 "compare_and_write": false, 00:28:30.360 "abort": false, 
00:28:30.360 "seek_hole": true, 00:28:30.360 "seek_data": true, 00:28:30.360 "copy": false, 00:28:30.360 "nvme_iov_md": false 00:28:30.360 }, 00:28:30.360 "driver_specific": { 00:28:30.360 "lvol": { 00:28:30.360 "lvol_store_uuid": "aaac26bd-af4f-487b-ab58-e84255148848", 00:28:30.360 "base_bdev": "nvme0n1", 00:28:30.360 "thin_provision": true, 00:28:30.360 "num_allocated_clusters": 0, 00:28:30.360 "snapshot": false, 00:28:30.360 "clone": false, 00:28:30.360 "esnap_clone": false 00:28:30.360 } 00:28:30.360 } 00:28:30.360 } 00:28:30.360 ]' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 244787ab-bac3-4f86-9d74-0e2fd9e50f01 --l2p_dram_limit 10' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:30.360 10:22:58 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 244787ab-bac3-4f86-9d74-0e2fd9e50f01 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:30.619 [2024-11-03 10:22:58.902060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.902101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:30.619 [2024-11-03 10:22:58.902113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:30.619 [2024-11-03 10:22:58.902124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.902163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.902172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:30.619 [2024-11-03 10:22:58.902179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:30.619 [2024-11-03 10:22:58.902188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.902206] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:30.619 [2024-11-03 10:22:58.902482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:30.619 [2024-11-03 10:22:58.902504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.902512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:30.619 [2024-11-03 10:22:58.902521] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:28:30.619 [2024-11-03 10:22:58.902529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.902552] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6 00:28:30.619 [2024-11-03 10:22:58.903814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.903840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:30.619 [2024-11-03 10:22:58.903849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:30.619 [2024-11-03 10:22:58.903855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.910765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.910791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:30.619 [2024-11-03 10:22:58.910801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:28:30.619 [2024-11-03 10:22:58.910807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.910905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.910913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:30.619 [2024-11-03 10:22:58.910922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:30.619 [2024-11-03 10:22:58.910931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.910964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.910974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:30.619 [2024-11-03 10:22:58.910982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:30.619 [2024-11-03 10:22:58.910988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.911010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:30.619 [2024-11-03 10:22:58.912669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.912695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:30.619 [2024-11-03 10:22:58.912705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:28:30.619 [2024-11-03 10:22:58.912712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.912741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.912749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:30.619 [2024-11-03 10:22:58.912755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:30.619 [2024-11-03 10:22:58.912768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.912787] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:30.619 [2024-11-03 10:22:58.912905] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:30.619 [2024-11-03 10:22:58.912916] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:30.619 [2024-11-03 10:22:58.912927] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:30.619 [2024-11-03 10:22:58.912935] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:30.619 [2024-11-03 10:22:58.912944] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:30.619 [2024-11-03 10:22:58.912950] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:30.619 [2024-11-03 10:22:58.912962] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:30.619 [2024-11-03 10:22:58.912968] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:30.619 [2024-11-03 10:22:58.912975] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:30.619 [2024-11-03 10:22:58.912983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.912990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:30.619 [2024-11-03 10:22:58.912996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:28:30.619 [2024-11-03 10:22:58.913003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.619 [2024-11-03 10:22:58.913067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.619 [2024-11-03 10:22:58.913083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:30.620 [2024-11-03 10:22:58.913089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:30.620 [2024-11-03 10:22:58.913097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.620 [2024-11-03 10:22:58.913171] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:30.620 [2024-11-03 10:22:58.913190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:30.620 [2024-11-03 10:22:58.913199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:30.620 [2024-11-03 10:22:58.913220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:30.620 [2024-11-03 10:22:58.913250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.620 [2024-11-03 10:22:58.913262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:30.620 [2024-11-03 10:22:58.913269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:30.620 [2024-11-03 10:22:58.913274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.620 [2024-11-03 10:22:58.913282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:30.620 [2024-11-03 10:22:58.913288] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:30.620 [2024-11-03 10:22:58.913294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:30.620 [2024-11-03 10:22:58.913309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:30.620 [2024-11-03 10:22:58.913328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:30.620 [2024-11-03 10:22:58.913350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:30.620 [2024-11-03 10:22:58.913370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:30.620 [2024-11-03 10:22:58.913396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:30.620 [2024-11-03 10:22:58.913417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.620 [2024-11-03 10:22:58.913430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:30.620 [2024-11-03 10:22:58.913437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:30.620 [2024-11-03 10:22:58.913443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.620 [2024-11-03 10:22:58.913450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:30.620 [2024-11-03 10:22:58.913456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:30.620 [2024-11-03 10:22:58.913463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:30.620 [2024-11-03 10:22:58.913476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:30.620 [2024-11-03 10:22:58.913482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913490] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:30.620 [2024-11-03 10:22:58.913500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:30.620 [2024-11-03 10:22:58.913509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:30.620 [2024-11-03 10:22:58.913515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.620 [2024-11-03 10:22:58.913524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:30.620 [2024-11-03 10:22:58.913531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:30.620 [2024-11-03 10:22:58.913539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:30.620 [2024-11-03 10:22:58.913546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:30.620 [2024-11-03 10:22:58.913553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:30.620 [2024-11-03 10:22:58.913559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:30.620 [2024-11-03 10:22:58.913570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:30.620 [2024-11-03 10:22:58.913578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.620 [2024-11-03 10:22:58.913588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:30.620 [2024-11-03 10:22:58.913595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:30.620 [2024-11-03 10:22:58.913602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:30.620 [2024-11-03 10:22:58.913609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:30.620 [2024-11-03 10:22:58.913618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:30.620 [2024-11-03 10:22:58.913624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:30.620 [2024-11-03 10:22:58.913635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:30.620 [2024-11-03 10:22:58.913641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:30.620 [2024-11-03 10:22:58.913649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:30.620 [2024-11-03 10:22:58.913656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:30.620 [2024-11-03 10:22:58.913665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:30.620 [2024-11-03 10:22:58.913671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:30.620 [2024-11-03 10:22:58.913680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:30.620 [2024-11-03 10:22:58.913687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:30.620 [2024-11-03 10:22:58.913694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:30.620 [2024-11-03 10:22:58.913703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.620 [2024-11-03 10:22:58.913710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:30.620 [2024-11-03 10:22:58.913716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:30.620 [2024-11-03 10:22:58.913723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:30.620 [2024-11-03 10:22:58.913728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:30.620 [2024-11-03 10:22:58.913735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.620 [2024-11-03 10:22:58.913740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:30.620 [2024-11-03 10:22:58.913749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:28:30.620 [2024-11-03 10:22:58.913754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.620 [2024-11-03 10:22:58.913787] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:30.620 [2024-11-03 10:22:58.913796] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:34.816 [2024-11-03 10:23:02.534338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.534406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:34.816 [2024-11-03 10:23:02.534432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3620.533 ms 00:28:34.816 [2024-11-03 10:23:02.534440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.545132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.545172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:34.816 [2024-11-03 10:23:02.545185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.607 ms 00:28:34.816 [2024-11-03 10:23:02.545192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.545295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.545304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:34.816 [2024-11-03 10:23:02.545316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:34.816 [2024-11-03 10:23:02.545322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.554813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.554847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.816 [2024-11-03 10:23:02.554857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.458 ms 00:28:34.816 [2024-11-03 10:23:02.554863] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.554889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.554896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.816 [2024-11-03 10:23:02.554907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:34.816 [2024-11-03 10:23:02.554913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.555329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.555350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.816 [2024-11-03 10:23:02.555359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:28:34.816 [2024-11-03 10:23:02.555367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.555462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.555471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.816 [2024-11-03 10:23:02.555479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:34.816 [2024-11-03 10:23:02.555489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.575020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.575076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.816 [2024-11-03 10:23:02.575098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.496 ms 00:28:34.816 [2024-11-03 10:23:02.575110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.586497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:34.816 [2024-11-03 10:23:02.589421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.589450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:34.816 [2024-11-03 10:23:02.589459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.153 ms 00:28:34.816 [2024-11-03 10:23:02.589467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.659120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.659162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:34.816 [2024-11-03 10:23:02.659174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.630 ms 00:28:34.816 [2024-11-03 10:23:02.659184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.659350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.659363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:34.816 [2024-11-03 10:23:02.659370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:28:34.816 [2024-11-03 10:23:02.659378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.663116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.663148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:34.816 [2024-11-03 10:23:02.663156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.723 ms 00:28:34.816 [2024-11-03 10:23:02.663164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.666252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.666281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:34.816 [2024-11-03 10:23:02.666289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:28:34.816 [2024-11-03 10:23:02.666297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.666556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.666567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:34.816 [2024-11-03 10:23:02.666574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:28:34.816 [2024-11-03 10:23:02.666584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.700449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.700482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:34.816 [2024-11-03 10:23:02.700491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.849 ms 00:28:34.816 [2024-11-03 10:23:02.700500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.705221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.705262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:34.816 [2024-11-03 10:23:02.705271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.672 ms 00:28:34.816 [2024-11-03 10:23:02.705279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.708902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.708930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:34.816 [2024-11-03 10:23:02.708938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.595 ms 00:28:34.816 [2024-11-03 10:23:02.708945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.713109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.713141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:34.816 [2024-11-03 10:23:02.713148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:28:34.816 [2024-11-03 10:23:02.713157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.713188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.713198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:34.816 [2024-11-03 10:23:02.713205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:34.816 [2024-11-03 10:23:02.713213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.713280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.816 [2024-11-03 10:23:02.713291] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:34.816 [2024-11-03 10:23:02.713298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:34.816 [2024-11-03 10:23:02.713307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.816 [2024-11-03 10:23:02.714134] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3811.685 ms, result 0 00:28:34.816 { 00:28:34.816 "name": "ftl0", 00:28:34.816 "uuid": "48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6" 00:28:34.816 } 00:28:34.817 10:23:02 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:34.817 10:23:02 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:34.817 10:23:02 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:34.817 10:23:02 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:34.817 [2024-11-03 10:23:03.119128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.119161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:34.817 [2024-11-03 10:23:03.119171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:34.817 [2024-11-03 10:23:03.119178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.119200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:34.817 [2024-11-03 10:23:03.119747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.119772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:34.817 [2024-11-03 10:23:03.119780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:28:34.817 [2024-11-03 10:23:03.119789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.119990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.120007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:34.817 [2024-11-03 10:23:03.120015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:28:34.817 [2024-11-03 10:23:03.120024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.122444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.122465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:34.817 [2024-11-03 10:23:03.122473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:28:34.817 [2024-11-03 10:23:03.122481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.127095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.127120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:34.817 [2024-11-03 10:23:03.127128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.601 ms 00:28:34.817 [2024-11-03 10:23:03.127136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.129585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:34.817 [2024-11-03 10:23:03.129617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:34.817 [2024-11-03 10:23:03.129625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:28:34.817 [2024-11-03 10:23:03.129632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.134981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.135012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:34.817 [2024-11-03 10:23:03.135020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.324 ms 00:28:34.817 [2024-11-03 10:23:03.135028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.135119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.135133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:34.817 [2024-11-03 10:23:03.135140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:34.817 [2024-11-03 10:23:03.135148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.138080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.138173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:34.817 [2024-11-03 10:23:03.138198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:28:34.817 [2024-11-03 10:23:03.138219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.141013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.141100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:34.817 [2024-11-03 10:23:03.141126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:28:34.817 [2024-11-03 10:23:03.141152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.143845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.143928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:34.817 [2024-11-03 10:23:03.143957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:28:34.817 [2024-11-03 10:23:03.143990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.146770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.817 [2024-11-03 10:23:03.146847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:34.817 [2024-11-03 10:23:03.146887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:28:34.817 [2024-11-03 10:23:03.146913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.817 [2024-11-03 10:23:03.146993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:34.817 [2024-11-03 10:23:03.147037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147098] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147815] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.147968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:34.817 [2024-11-03 10:23:03.148466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 
10:23:03.148684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:34.818 [2024-11-03 10:23:03.148979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.148997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:34.818 [2024-11-03 10:23:03.149188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:34.818 [2024-11-03 10:23:03.149197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6 00:28:34.818 
[2024-11-03 10:23:03.149207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:34.818 [2024-11-03 10:23:03.149215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:34.818 [2024-11-03 10:23:03.149235] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:34.818 [2024-11-03 10:23:03.149243] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:34.818 [2024-11-03 10:23:03.149253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:34.818 [2024-11-03 10:23:03.149260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:34.818 [2024-11-03 10:23:03.149271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:34.818 [2024-11-03 10:23:03.149278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:34.818 [2024-11-03 10:23:03.149286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:34.818 [2024-11-03 10:23:03.149293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-03 10:23:03.149302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:34.818 [2024-11-03 10:23:03.149312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:28:34.818 [2024-11-03 10:23:03.149322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-03 10:23:03.150958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-03 10:23:03.150989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:34.818 [2024-11-03 10:23:03.150997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:28:34.818 [2024-11-03 10:23:03.151007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-03 10:23:03.151107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-03 10:23:03.151119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:34.818 [2024-11-03 10:23:03.151129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:28:34.818 [2024-11-03 10:23:03.151137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-03 10:23:03.157859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.818 [2024-11-03 10:23:03.157895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.818 [2024-11-03 10:23:03.157911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.818 [2024-11-03 10:23:03.157921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-03 10:23:03.157977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.818 [2024-11-03 10:23:03.157992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.818 [2024-11-03 10:23:03.158000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.818 [2024-11-03 10:23:03.158009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-03 10:23:03.158075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.818 [2024-11-03 10:23:03.158090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.818 [2024-11-03 10:23:03.158099] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.818 [2024-11-03 10:23:03.158108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-03 10:23:03.158125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.818 [2024-11-03 10:23:03.158138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.818 [2024-11-03 10:23:03.158146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.818 [2024-11-03 10:23:03.158156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-03 10:23:03.170016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.818 [2024-11-03 10:23:03.170059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.819 [2024-11-03 10:23:03.170074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.819 [2024-11-03 10:23:03.170083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.180423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.078 [2024-11-03 10:23:03.180472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.078 [2024-11-03 10:23:03.180482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.078 [2024-11-03 10:23:03.180495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.180571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.078 [2024-11-03 10:23:03.180586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.078 [2024-11-03 10:23:03.180596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.078 [2024-11-03 10:23:03.180605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.180652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.078 [2024-11-03 10:23:03.180665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.078 [2024-11-03 10:23:03.180673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.078 [2024-11-03 10:23:03.180684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.180757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.078 [2024-11-03 10:23:03.180769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.078 [2024-11-03 10:23:03.180778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.078 [2024-11-03 10:23:03.180787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.180818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.078 [2024-11-03 10:23:03.180829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:35.078 [2024-11-03 10:23:03.180838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.078 [2024-11-03 10:23:03.180851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.180896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.078 [2024-11-03 10:23:03.180978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:35.078 [2024-11-03 10:23:03.180986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.078 [2024-11-03 10:23:03.180997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.181050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.078 [2024-11-03 10:23:03.181070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.078 [2024-11-03 10:23:03.181080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.078 [2024-11-03 10:23:03.181093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.078 [2024-11-03 10:23:03.181253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.056 ms, result 0 00:28:35.078 true 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92803 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92803 ']' 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92803 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92803 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:35.078 killing process with pid 92803 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92803' 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92803 00:28:35.078 10:23:03 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92803 00:28:43.268 10:23:10 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:45.811 262144+0 records in 00:28:45.811 262144+0 records out 00:28:45.811 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.71429 s, 289 MB/s 00:28:45.811 10:23:13 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:47.724 10:23:15 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:47.724 [2024-11-03 10:23:15.893056] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
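A minimal sketch of the data-load step traced just above, assuming the same paths this restore.sh run uses; the TESTFILE/FTL_JSON shorthand is introduced here for readability and is not taken from the log:

  # 262144 records of 4 KiB = 1073741824 bytes (1 GiB), matching the dd output above
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K
  md5sum "$TESTFILE"   # baseline checksum, presumably compared again after the restore
  # write the file into the ftl0 bdev; the SPDK startup log for this command follows
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"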
00:28:47.724 [2024-11-03 10:23:15.893168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93006 ] 00:28:47.724 [2024-11-03 10:23:16.026891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.984 [2024-11-03 10:23:16.088094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.984 [2024-11-03 10:23:16.235579] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.984 [2024-11-03 10:23:16.235681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.245 [2024-11-03 10:23:16.399519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.245 [2024-11-03 10:23:16.399584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.245 [2024-11-03 10:23:16.399605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:48.246 [2024-11-03 10:23:16.399614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.399676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.399688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.246 [2024-11-03 10:23:16.399697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:48.246 [2024-11-03 10:23:16.399712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.399735] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.246 [2024-11-03 10:23:16.400011] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.246 [2024-11-03 10:23:16.400077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.400087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.246 [2024-11-03 10:23:16.400100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:28:48.246 [2024-11-03 10:23:16.400109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.402411] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:48.246 [2024-11-03 10:23:16.407036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.407089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:48.246 [2024-11-03 10:23:16.407101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:28:48.246 [2024-11-03 10:23:16.407118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.407199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.407213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:48.246 [2024-11-03 10:23:16.407246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:48.246 [2024-11-03 10:23:16.407255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.418662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:48.246 [2024-11-03 10:23:16.418710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.246 [2024-11-03 10:23:16.418723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.358 ms 00:28:48.246 [2024-11-03 10:23:16.418732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.418839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.418850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.246 [2024-11-03 10:23:16.418859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:48.246 [2024-11-03 10:23:16.418867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.418930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.418944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.246 [2024-11-03 10:23:16.418954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:48.246 [2024-11-03 10:23:16.418962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.418985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.246 [2024-11-03 10:23:16.421680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.421718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.246 [2024-11-03 10:23:16.421729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:28:48.246 [2024-11-03 10:23:16.421737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.421788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.421797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.246 [2024-11-03 10:23:16.421806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:48.246 [2024-11-03 10:23:16.421814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.421838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:48.246 [2024-11-03 10:23:16.421868] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:48.246 [2024-11-03 10:23:16.421916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:48.246 [2024-11-03 10:23:16.421935] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:48.246 [2024-11-03 10:23:16.422057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.246 [2024-11-03 10:23:16.422070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.246 [2024-11-03 10:23:16.422082] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.246 [2024-11-03 10:23:16.422093] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422112] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422122] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.246 [2024-11-03 10:23:16.422130] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.246 [2024-11-03 10:23:16.422139] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.246 [2024-11-03 10:23:16.422147] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.246 [2024-11-03 10:23:16.422156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.422164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.246 [2024-11-03 10:23:16.422174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:28:48.246 [2024-11-03 10:23:16.422187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.422288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.246 [2024-11-03 10:23:16.422299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.246 [2024-11-03 10:23:16.422310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:28:48.246 [2024-11-03 10:23:16.422319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.246 [2024-11-03 10:23:16.422427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.246 [2024-11-03 10:23:16.422442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.246 [2024-11-03 10:23:16.422453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.246 [2024-11-03 10:23:16.422489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.246 [2024-11-03 10:23:16.422515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.246 [2024-11-03 10:23:16.422533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.246 [2024-11-03 10:23:16.422541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.246 [2024-11-03 10:23:16.422552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.246 [2024-11-03 10:23:16.422560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.246 [2024-11-03 10:23:16.422570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:48.246 [2024-11-03 10:23:16.422579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.246 [2024-11-03 10:23:16.422597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422605] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.246 [2024-11-03 10:23:16.422620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.246 [2024-11-03 10:23:16.422645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.246 [2024-11-03 10:23:16.422668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.246 [2024-11-03 10:23:16.422702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.246 [2024-11-03 10:23:16.422716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.246 [2024-11-03 10:23:16.422723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.246 [2024-11-03 10:23:16.422730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.246 [2024-11-03 10:23:16.422738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.247 [2024-11-03 10:23:16.422744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.247 [2024-11-03 10:23:16.422751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.247 [2024-11-03 10:23:16.422759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.247 [2024-11-03 10:23:16.422766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.247 [2024-11-03 10:23:16.422773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.247 [2024-11-03 10:23:16.422780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.247 [2024-11-03 10:23:16.422789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.247 [2024-11-03 10:23:16.422796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.247 [2024-11-03 10:23:16.422805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.247 [2024-11-03 10:23:16.422818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.247 [2024-11-03 10:23:16.422826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.247 [2024-11-03 10:23:16.422838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.247 [2024-11-03 10:23:16.422851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.247 [2024-11-03 10:23:16.422859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.247 [2024-11-03 10:23:16.422869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.247 
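The layout numbers above are internally consistent if one assumes the FTL's 4 KiB block size: 20971520 L2P entries times the 4-byte address size is exactly the 80.00 MiB the l2p region provides (blk_sz 0x5000 in the SB metadata dump that follows). A quick check, with the 4 KiB block size being the only assumption:

  # Values taken from the layout dump above; 4 KiB blocks assumed
  echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P table: 80 (MiB)
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # l2p region, blk_sz 0x5000: 80 (MiB)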
[2024-11-03 10:23:16.422876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.247 [2024-11-03 10:23:16.422883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.247 [2024-11-03 10:23:16.422892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.247 [2024-11-03 10:23:16.422902] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.247 [2024-11-03 10:23:16.422913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.247 [2024-11-03 10:23:16.422927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.247 [2024-11-03 10:23:16.422934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.247 [2024-11-03 10:23:16.422942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.247 [2024-11-03 10:23:16.422949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.247 [2024-11-03 10:23:16.422957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.247 [2024-11-03 10:23:16.422966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.247 [2024-11-03 10:23:16.422974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.247 [2024-11-03 10:23:16.422982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.247 [2024-11-03 10:23:16.422989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.247 [2024-11-03 10:23:16.423002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.247 [2024-11-03 10:23:16.423009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.247 [2024-11-03 10:23:16.423016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:48.247 [2024-11-03 10:23:16.423025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.247 [2024-11-03 10:23:16.423034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.247 [2024-11-03 10:23:16.423042] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:48.247 [2024-11-03 10:23:16.423053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.247 [2024-11-03 10:23:16.423061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.247 [2024-11-03 10:23:16.423069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.247 [2024-11-03 10:23:16.423080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.247 [2024-11-03 10:23:16.423089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.247 [2024-11-03 10:23:16.423098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.423112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.247 [2024-11-03 10:23:16.423120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:28:48.247 [2024-11-03 10:23:16.423135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.451063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.451124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:48.247 [2024-11-03 10:23:16.451140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.859 ms 00:28:48.247 [2024-11-03 10:23:16.451150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.451272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.451283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:48.247 [2024-11-03 10:23:16.451294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:48.247 [2024-11-03 10:23:16.451303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.467159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.467463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:48.247 [2024-11-03 10:23:16.467483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.786 ms 00:28:48.247 [2024-11-03 10:23:16.467494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.467536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.467545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:48.247 [2024-11-03 10:23:16.467554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:48.247 [2024-11-03 10:23:16.467563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.468340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.468372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:48.247 [2024-11-03 10:23:16.468384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:28:48.247 [2024-11-03 10:23:16.468399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.468562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.468574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:48.247 [2024-11-03 10:23:16.468583] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:28:48.247 [2024-11-03 10:23:16.468592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.477898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.477944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:48.247 [2024-11-03 10:23:16.477962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.280 ms 00:28:48.247 [2024-11-03 10:23:16.477971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.482828] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:48.247 [2024-11-03 10:23:16.483028] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:48.247 [2024-11-03 10:23:16.483048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.483059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:48.247 [2024-11-03 10:23:16.483068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.960 ms 00:28:48.247 [2024-11-03 10:23:16.483078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.499590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.499643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:48.247 [2024-11-03 10:23:16.499657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.467 ms 00:28:48.247 [2024-11-03 10:23:16.499669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.502674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.502721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:48.247 [2024-11-03 10:23:16.502732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:28:48.247 [2024-11-03 10:23:16.502742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.505125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.505175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:48.247 [2024-11-03 10:23:16.505185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:28:48.247 [2024-11-03 10:23:16.505193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.505570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.505595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:48.247 [2024-11-03 10:23:16.505606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:28:48.247 [2024-11-03 10:23:16.505615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.535080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.535140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:48.247 [2024-11-03 10:23:16.535156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
29.445 ms 00:28:48.247 [2024-11-03 10:23:16.535167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.543826] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:48.247 [2024-11-03 10:23:16.547439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.247 [2024-11-03 10:23:16.547482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:48.247 [2024-11-03 10:23:16.547495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.223 ms 00:28:48.247 [2024-11-03 10:23:16.547516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.247 [2024-11-03 10:23:16.547592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.248 [2024-11-03 10:23:16.547605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:48.248 [2024-11-03 10:23:16.547615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:48.248 [2024-11-03 10:23:16.547624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.248 [2024-11-03 10:23:16.547705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.248 [2024-11-03 10:23:16.547717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:48.248 [2024-11-03 10:23:16.547727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:48.248 [2024-11-03 10:23:16.547736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.248 [2024-11-03 10:23:16.547775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.248 [2024-11-03 10:23:16.547794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:48.248 [2024-11-03 10:23:16.547804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:48.248 [2024-11-03 10:23:16.547812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.248 [2024-11-03 10:23:16.547858] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:48.248 [2024-11-03 10:23:16.547872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.248 [2024-11-03 10:23:16.547881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:48.248 [2024-11-03 10:23:16.547894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:48.248 [2024-11-03 10:23:16.547903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.248 [2024-11-03 10:23:16.553971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.248 [2024-11-03 10:23:16.554206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:48.248 [2024-11-03 10:23:16.554246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.048 ms 00:28:48.248 [2024-11-03 10:23:16.554258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.248 [2024-11-03 10:23:16.554346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.248 [2024-11-03 10:23:16.554358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:48.248 [2024-11-03 10:23:16.554368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:48.248 [2024-11-03 10:23:16.554380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.248 
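With startup finished, restore.sh streams the 1 GiB testfile into ftl0; the Copying progress below is that write, and a later pass reads the data back for comparison. A minimal sketch of the round trip using only spdk_dd flags seen in this log (paths shortened; the test itself reads back over the original file after recording its md5sum, whereas the separate readback file, an assumed name, keeps the comparison explicit):

  # Write the testfile into the FTL bdev, read it back, compare digests.
  # --count=262144 is the length in 4 KiB blocks: 262144 * 4 KiB = 1 GiB.
  spdk_dd --if=testfile --ob=ftl0 --json=ftl.json
  spdk_dd --ib=ftl0 --of=readback --json=ftl.json --count=262144
  md5sum testfile readback      # matching digests mean the data survived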
[2024-11-03 10:23:16.556175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.097 ms, result 0 00:28:49.635  [2024-11-03T10:23:18.571Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-03T10:23:19.951Z] Copying: 33/1024 [MB] (13 MBps) [2024-11-03T10:23:20.892Z] Copying: 55/1024 [MB] (22 MBps) [2024-11-03T10:23:21.826Z] Copying: 71/1024 [MB] (15 MBps) [2024-11-03T10:23:22.760Z] Copying: 96/1024 [MB] (25 MBps) [2024-11-03T10:23:23.700Z] Copying: 119/1024 [MB] (22 MBps) [2024-11-03T10:23:24.641Z] Copying: 137/1024 [MB] (18 MBps) [2024-11-03T10:23:25.578Z] Copying: 151/1024 [MB] (13 MBps) [2024-11-03T10:23:26.962Z] Copying: 164/1024 [MB] (13 MBps) [2024-11-03T10:23:27.901Z] Copying: 174/1024 [MB] (10 MBps) [2024-11-03T10:23:28.838Z] Copying: 188/1024 [MB] (13 MBps) [2024-11-03T10:23:29.770Z] Copying: 199/1024 [MB] (10 MBps) [2024-11-03T10:23:30.703Z] Copying: 213/1024 [MB] (14 MBps) [2024-11-03T10:23:31.635Z] Copying: 228/1024 [MB] (14 MBps) [2024-11-03T10:23:32.599Z] Copying: 242/1024 [MB] (14 MBps) [2024-11-03T10:23:33.969Z] Copying: 257/1024 [MB] (14 MBps) [2024-11-03T10:23:34.907Z] Copying: 272/1024 [MB] (14 MBps) [2024-11-03T10:23:35.845Z] Copying: 287/1024 [MB] (14 MBps) [2024-11-03T10:23:36.779Z] Copying: 300/1024 [MB] (12 MBps) [2024-11-03T10:23:37.719Z] Copying: 315/1024 [MB] (15 MBps) [2024-11-03T10:23:38.653Z] Copying: 329/1024 [MB] (13 MBps) [2024-11-03T10:23:39.586Z] Copying: 342/1024 [MB] (13 MBps) [2024-11-03T10:23:40.963Z] Copying: 357/1024 [MB] (14 MBps) [2024-11-03T10:23:41.896Z] Copying: 372/1024 [MB] (14 MBps) [2024-11-03T10:23:42.836Z] Copying: 388/1024 [MB] (15 MBps) [2024-11-03T10:23:43.768Z] Copying: 400/1024 [MB] (12 MBps) [2024-11-03T10:23:44.702Z] Copying: 414/1024 [MB] (14 MBps) [2024-11-03T10:23:45.638Z] Copying: 429/1024 [MB] (14 MBps) [2024-11-03T10:23:46.582Z] Copying: 444/1024 [MB] (14 MBps) [2024-11-03T10:23:47.967Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-03T10:23:48.928Z] Copying: 469/1024 [MB] (15 MBps) [2024-11-03T10:23:49.871Z] Copying: 479/1024 [MB] (10 MBps) [2024-11-03T10:23:50.805Z] Copying: 493/1024 [MB] (13 MBps) [2024-11-03T10:23:51.738Z] Copying: 507/1024 [MB] (14 MBps) [2024-11-03T10:23:52.703Z] Copying: 523/1024 [MB] (15 MBps) [2024-11-03T10:23:53.637Z] Copying: 537/1024 [MB] (14 MBps) [2024-11-03T10:23:54.571Z] Copying: 552/1024 [MB] (14 MBps) [2024-11-03T10:23:55.955Z] Copying: 566/1024 [MB] (14 MBps) [2024-11-03T10:23:56.890Z] Copying: 578/1024 [MB] (12 MBps) [2024-11-03T10:23:57.823Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-03T10:23:58.756Z] Copying: 605/1024 [MB] (14 MBps) [2024-11-03T10:23:59.690Z] Copying: 619/1024 [MB] (14 MBps) [2024-11-03T10:24:00.621Z] Copying: 634/1024 [MB] (14 MBps) [2024-11-03T10:24:01.995Z] Copying: 648/1024 [MB] (13 MBps) [2024-11-03T10:24:02.929Z] Copying: 662/1024 [MB] (13 MBps) [2024-11-03T10:24:03.862Z] Copying: 677/1024 [MB] (14 MBps) [2024-11-03T10:24:04.794Z] Copying: 691/1024 [MB] (14 MBps) [2024-11-03T10:24:05.727Z] Copying: 705/1024 [MB] (14 MBps) [2024-11-03T10:24:06.659Z] Copying: 719/1024 [MB] (13 MBps) [2024-11-03T10:24:07.592Z] Copying: 733/1024 [MB] (14 MBps) [2024-11-03T10:24:08.967Z] Copying: 748/1024 [MB] (14 MBps) [2024-11-03T10:24:09.902Z] Copying: 761/1024 [MB] (13 MBps) [2024-11-03T10:24:10.845Z] Copying: 775/1024 [MB] (13 MBps) [2024-11-03T10:24:11.790Z] Copying: 786/1024 [MB] (10 MBps) [2024-11-03T10:24:12.771Z] Copying: 806/1024 [MB] (19 MBps) [2024-11-03T10:24:13.713Z] Copying: 821/1024 [MB] (14 MBps) 
[2024-11-03T10:24:14.654Z] Copying: 839/1024 [MB] (17 MBps) [2024-11-03T10:24:15.596Z] Copying: 859/1024 [MB] (20 MBps) [2024-11-03T10:24:16.982Z] Copying: 876/1024 [MB] (16 MBps) [2024-11-03T10:24:17.926Z] Copying: 894/1024 [MB] (17 MBps) [2024-11-03T10:24:18.869Z] Copying: 904/1024 [MB] (10 MBps) [2024-11-03T10:24:19.813Z] Copying: 919/1024 [MB] (15 MBps) [2024-11-03T10:24:20.755Z] Copying: 934/1024 [MB] (14 MBps) [2024-11-03T10:24:21.697Z] Copying: 948/1024 [MB] (14 MBps) [2024-11-03T10:24:22.640Z] Copying: 964/1024 [MB] (16 MBps) [2024-11-03T10:24:23.582Z] Copying: 980/1024 [MB] (16 MBps) [2024-11-03T10:24:24.967Z] Copying: 1000/1024 [MB] (20 MBps) [2024-11-03T10:24:25.541Z] Copying: 1013/1024 [MB] (12 MBps) [2024-11-03T10:24:25.541Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 10:24:25.237186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.179 [2024-11-03 10:24:25.237404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:57.179 [2024-11-03 10:24:25.237481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:57.179 [2024-11-03 10:24:25.237507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.179 [2024-11-03 10:24:25.237552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:57.179 [2024-11-03 10:24:25.238396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.179 [2024-11-03 10:24:25.238555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:57.179 [2024-11-03 10:24:25.238622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:29:57.179 [2024-11-03 10:24:25.238659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.179 [2024-11-03 10:24:25.240884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.179 [2024-11-03 10:24:25.241040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:57.179 [2024-11-03 10:24:25.241118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:29:57.179 [2024-11-03 10:24:25.241142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.179 [2024-11-03 10:24:25.241193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.179 [2024-11-03 10:24:25.241238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:57.179 [2024-11-03 10:24:25.241309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:57.179 [2024-11-03 10:24:25.241332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.179 [2024-11-03 10:24:25.241404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.179 [2024-11-03 10:24:25.241426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:57.179 [2024-11-03 10:24:25.241491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:57.179 [2024-11-03 10:24:25.241513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.179 [2024-11-03 10:24:25.241542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:57.179 [2024-11-03 10:24:25.241567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.241639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.241819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.241850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.241879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.241907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.241973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.242004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.242032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.242061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:57.179 [2024-11-03 10:24:25.242124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.242953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243861] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.243994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 
10:24:25.244064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:57.180 [2024-11-03 10:24:25.244307] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:57.180 [2024-11-03 10:24:25.244316] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6 00:29:57.181 [2024-11-03 10:24:25.244334] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:57.181 [2024-11-03 10:24:25.244342] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:57.181 [2024-11-03 10:24:25.244349] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:57.181 [2024-11-03 10:24:25.244358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:57.181 [2024-11-03 10:24:25.244366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:57.181 [2024-11-03 10:24:25.244374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:57.181 [2024-11-03 10:24:25.244382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:57.181 [2024-11-03 10:24:25.244388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:57.181 [2024-11-03 10:24:25.244395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:57.181 [2024-11-03 10:24:25.244403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.181 [2024-11-03 10:24:25.244411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:57.181 [2024-11-03 10:24:25.244422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:29:57.181 [2024-11-03 10:24:25.244430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.246940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.181 [2024-11-03 10:24:25.247089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:57.181 [2024-11-03 10:24:25.247146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.484 ms 00:29:57.181 [2024-11-03 10:24:25.247171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.247365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.181 [2024-11-03 10:24:25.247407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:57.181 [2024-11-03 10:24:25.247431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:29:57.181 [2024-11-03 10:24:25.247456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.254424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.254588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.181 [2024-11-03 10:24:25.254653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.254675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.254759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.254780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.181 [2024-11-03 10:24:25.254799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.254822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.254868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.254935] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.181 [2024-11-03 10:24:25.254958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.254978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.255008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.255029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.181 [2024-11-03 10:24:25.255047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.255066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.268383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.268564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.181 [2024-11-03 10:24:25.268618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.268641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.278457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.278612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.181 [2024-11-03 10:24:25.278664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.278686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.278756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.278779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.181 [2024-11-03 10:24:25.278798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.278818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.278870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.278893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.181 [2024-11-03 10:24:25.278914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.278966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.279050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.279074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.181 [2024-11-03 10:24:25.279093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.279112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.279151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.279173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:57.181 [2024-11-03 10:24:25.279192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.279266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.279314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.279332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.181 [2024-11-03 10:24:25.279341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.279349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.279394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.181 [2024-11-03 10:24:25.279404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.181 [2024-11-03 10:24:25.279413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.181 [2024-11-03 10:24:25.279421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.181 [2024-11-03 10:24:25.279544] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.329 ms, result 0 00:29:57.442 00:29:57.442 00:29:57.442 10:24:25 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:57.701 [2024-11-03 10:24:25.819648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:57.701 [2024-11-03 10:24:25.819985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93720 ] 00:29:57.701 [2024-11-03 10:24:25.957031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.701 [2024-11-03 10:24:26.008521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.963 [2024-11-03 10:24:26.122209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:57.963 [2024-11-03 10:24:26.122309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:57.963 [2024-11-03 10:24:26.282425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.282484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:57.963 [2024-11-03 10:24:26.282505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:57.963 [2024-11-03 10:24:26.282518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.282573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.282584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.963 [2024-11-03 10:24:26.282594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:57.963 [2024-11-03 10:24:26.282607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.282628] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:57.963 [2024-11-03 10:24:26.282902] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:57.963 [2024-11-03 10:24:26.282920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.282932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:29:57.963 [2024-11-03 10:24:26.282946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:29:57.963 [2024-11-03 10:24:26.282953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.283252] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:57.963 [2024-11-03 10:24:26.283279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.283289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:57.963 [2024-11-03 10:24:26.283298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:57.963 [2024-11-03 10:24:26.283306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.283359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.283395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:57.963 [2024-11-03 10:24:26.283407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:57.963 [2024-11-03 10:24:26.283415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.283698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.283723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.963 [2024-11-03 10:24:26.283733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:29:57.963 [2024-11-03 10:24:26.283741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.283826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.283844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.963 [2024-11-03 10:24:26.283854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:57.963 [2024-11-03 10:24:26.283865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.283889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.283898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:57.963 [2024-11-03 10:24:26.283906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:57.963 [2024-11-03 10:24:26.283914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.283934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:57.963 [2024-11-03 10:24:26.286030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.286079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.963 [2024-11-03 10:24:26.286091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:29:57.963 [2024-11-03 10:24:26.286099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.286133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.286142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:57.963 [2024-11-03 10:24:26.286150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:29:57.963 [2024-11-03 10:24:26.286157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.286191] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:57.963 [2024-11-03 10:24:26.286211] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:57.963 [2024-11-03 10:24:26.286280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:57.963 [2024-11-03 10:24:26.286303] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:57.963 [2024-11-03 10:24:26.286409] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:57.963 [2024-11-03 10:24:26.286419] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:57.963 [2024-11-03 10:24:26.286431] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:57.963 [2024-11-03 10:24:26.286445] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:57.963 [2024-11-03 10:24:26.286455] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:57.963 [2024-11-03 10:24:26.286465] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:57.963 [2024-11-03 10:24:26.286475] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:57.963 [2024-11-03 10:24:26.286483] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:57.963 [2024-11-03 10:24:26.286491] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:57.963 [2024-11-03 10:24:26.286499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.286506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:57.963 [2024-11-03 10:24:26.286514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:29:57.963 [2024-11-03 10:24:26.286522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.286622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.963 [2024-11-03 10:24:26.286637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:57.963 [2024-11-03 10:24:26.286645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:57.963 [2024-11-03 10:24:26.286658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.963 [2024-11-03 10:24:26.286762] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:57.963 [2024-11-03 10:24:26.286775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:57.963 [2024-11-03 10:24:26.286791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.963 [2024-11-03 10:24:26.286803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.963 [2024-11-03 10:24:26.286812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:57.963 [2024-11-03 10:24:26.286820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:57.963 [2024-11-03 
10:24:26.286828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:57.963 [2024-11-03 10:24:26.286837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:57.963 [2024-11-03 10:24:26.286846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:57.963 [2024-11-03 10:24:26.286855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.963 [2024-11-03 10:24:26.286863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:57.963 [2024-11-03 10:24:26.286872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:57.963 [2024-11-03 10:24:26.286880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.963 [2024-11-03 10:24:26.286888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:57.963 [2024-11-03 10:24:26.286896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:57.963 [2024-11-03 10:24:26.286904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.963 [2024-11-03 10:24:26.286912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:57.963 [2024-11-03 10:24:26.286920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:57.963 [2024-11-03 10:24:26.286928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.963 [2024-11-03 10:24:26.286938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:57.963 [2024-11-03 10:24:26.286946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:57.963 [2024-11-03 10:24:26.286954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.963 [2024-11-03 10:24:26.286962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:57.963 [2024-11-03 10:24:26.286970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:57.963 [2024-11-03 10:24:26.286978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.963 [2024-11-03 10:24:26.286985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:57.963 [2024-11-03 10:24:26.286993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:57.963 [2024-11-03 10:24:26.287001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.963 [2024-11-03 10:24:26.287009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:57.963 [2024-11-03 10:24:26.287017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:57.963 [2024-11-03 10:24:26.287025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.963 [2024-11-03 10:24:26.287032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:57.963 [2024-11-03 10:24:26.287039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:57.963 [2024-11-03 10:24:26.287047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.963 [2024-11-03 10:24:26.287054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:57.964 [2024-11-03 10:24:26.287066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:57.964 [2024-11-03 10:24:26.287074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.964 [2024-11-03 10:24:26.287081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log 00:29:57.964 [2024-11-03 10:24:26.287089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:57.964 [2024-11-03 10:24:26.287097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.964 [2024-11-03 10:24:26.287105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:57.964 [2024-11-03 10:24:26.287112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:57.964 [2024-11-03 10:24:26.287120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.964 [2024-11-03 10:24:26.287127] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:57.964 [2024-11-03 10:24:26.287138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:57.964 [2024-11-03 10:24:26.287150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.964 [2024-11-03 10:24:26.287159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.964 [2024-11-03 10:24:26.287167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:57.964 [2024-11-03 10:24:26.287175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:57.964 [2024-11-03 10:24:26.287183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:57.964 [2024-11-03 10:24:26.287191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:57.964 [2024-11-03 10:24:26.287202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:57.964 [2024-11-03 10:24:26.287210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:57.964 [2024-11-03 10:24:26.287219] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:57.964 [2024-11-03 10:24:26.287251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.964 [2024-11-03 10:24:26.287264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:57.964 [2024-11-03 10:24:26.287272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:57.964 [2024-11-03 10:24:26.287279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:57.964 [2024-11-03 10:24:26.287286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:57.964 [2024-11-03 10:24:26.287294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:57.964 [2024-11-03 10:24:26.287302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:57.964 [2024-11-03 10:24:26.287309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:57.964 [2024-11-03 10:24:26.287318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:57.964 [2024-11-03 10:24:26.287325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:57.964 [2024-11-03 10:24:26.287333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:57.964 [2024-11-03 10:24:26.287340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:57.964 [2024-11-03 10:24:26.287354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:57.964 [2024-11-03 10:24:26.287365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:57.964 [2024-11-03 10:24:26.287373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:57.964 [2024-11-03 10:24:26.287379] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:57.964 [2024-11-03 10:24:26.287388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.964 [2024-11-03 10:24:26.287397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:57.964 [2024-11-03 10:24:26.287405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:57.964 [2024-11-03 10:24:26.287412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:57.964 [2024-11-03 10:24:26.287419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:57.964 [2024-11-03 10:24:26.287427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.964 [2024-11-03 10:24:26.287435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:57.964 [2024-11-03 10:24:26.287443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:29:57.964 [2024-11-03 10:24:26.287451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.964 [2024-11-03 10:24:26.305445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.964 [2024-11-03 10:24:26.305636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.964 [2024-11-03 10:24:26.305661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.951 ms 00:29:57.964 [2024-11-03 10:24:26.305670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.964 [2024-11-03 10:24:26.305763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.964 [2024-11-03 10:24:26.305772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:57.964 [2024-11-03 10:24:26.305781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:57.964 [2024-11-03 10:24:26.305789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.964 [2024-11-03 10:24:26.317907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.964 [2024-11-03 10:24:26.317960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.964 
[2024-11-03 10:24:26.317976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.053 ms 00:29:57.964 [2024-11-03 10:24:26.317986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.964 [2024-11-03 10:24:26.318023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.964 [2024-11-03 10:24:26.318033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.964 [2024-11-03 10:24:26.318044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:57.964 [2024-11-03 10:24:26.318053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.964 [2024-11-03 10:24:26.318166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.964 [2024-11-03 10:24:26.318179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.964 [2024-11-03 10:24:26.318189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:57.964 [2024-11-03 10:24:26.318202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.964 [2024-11-03 10:24:26.318377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.964 [2024-11-03 10:24:26.318390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.964 [2024-11-03 10:24:26.318401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:29:57.964 [2024-11-03 10:24:26.318413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.225 [2024-11-03 10:24:26.325330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.225 [2024-11-03 10:24:26.325371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:58.225 [2024-11-03 10:24:26.325381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.893 ms 00:29:58.225 [2024-11-03 10:24:26.325388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.225 [2024-11-03 10:24:26.325501] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:58.225 [2024-11-03 10:24:26.325514] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:58.225 [2024-11-03 10:24:26.325528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.225 [2024-11-03 10:24:26.325536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:58.225 [2024-11-03 10:24:26.325545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:58.225 [2024-11-03 10:24:26.325557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.225 [2024-11-03 10:24:26.338423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.225 [2024-11-03 10:24:26.338466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:58.225 [2024-11-03 10:24:26.338477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.850 ms 00:29:58.225 [2024-11-03 10:24:26.338485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.225 [2024-11-03 10:24:26.338614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.225 [2024-11-03 10:24:26.338623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:58.225 [2024-11-03 10:24:26.338632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.103 ms 00:29:58.225 [2024-11-03 10:24:26.338646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.225 [2024-11-03 10:24:26.338691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.338704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:58.226 [2024-11-03 10:24:26.338712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:58.226 [2024-11-03 10:24:26.338722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.339032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.339043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:58.226 [2024-11-03 10:24:26.339051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:29:58.226 [2024-11-03 10:24:26.339059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.339077] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:58.226 [2024-11-03 10:24:26.339086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.339094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:58.226 [2024-11-03 10:24:26.339103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:58.226 [2024-11-03 10:24:26.339113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.348273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:58.226 [2024-11-03 10:24:26.348561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.348583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:58.226 [2024-11-03 10:24:26.348593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.430 ms 00:29:58.226 [2024-11-03 10:24:26.348601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.351022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.351055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:58.226 [2024-11-03 10:24:26.351065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:29:58.226 [2024-11-03 10:24:26.351072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.351174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.351185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:58.226 [2024-11-03 10:24:26.351194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:58.226 [2024-11-03 10:24:26.351201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.351248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.351261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:58.226 [2024-11-03 10:24:26.351269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:58.226 [2024-11-03 10:24:26.351276] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.351312] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:58.226 [2024-11-03 10:24:26.351322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.351332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:58.226 [2024-11-03 10:24:26.351340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:58.226 [2024-11-03 10:24:26.351351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.357350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.357401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:58.226 [2024-11-03 10:24:26.357417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 00:29:58.226 [2024-11-03 10:24:26.357431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.357518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.226 [2024-11-03 10:24:26.357528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:58.226 [2024-11-03 10:24:26.357537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:58.226 [2024-11-03 10:24:26.357545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.226 [2024-11-03 10:24:26.358632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 75.751 ms, result 0 00:29:59.611  [2024-11-03T10:24:28.546Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-03T10:24:29.931Z] Copying: 28/1024 [MB] (12 MBps) [2024-11-03T10:24:30.874Z] Copying: 51/1024 [MB] (23 MBps) [2024-11-03T10:24:31.817Z] Copying: 68/1024 [MB] (16 MBps) [2024-11-03T10:24:32.814Z] Copying: 85/1024 [MB] (16 MBps) [2024-11-03T10:24:33.758Z] Copying: 100/1024 [MB] (15 MBps) [2024-11-03T10:24:34.700Z] Copying: 118/1024 [MB] (18 MBps) [2024-11-03T10:24:35.641Z] Copying: 134/1024 [MB] (16 MBps) [2024-11-03T10:24:36.585Z] Copying: 153/1024 [MB] (18 MBps) [2024-11-03T10:24:37.973Z] Copying: 171/1024 [MB] (18 MBps) [2024-11-03T10:24:38.546Z] Copying: 187/1024 [MB] (16 MBps) [2024-11-03T10:24:39.934Z] Copying: 206/1024 [MB] (18 MBps) [2024-11-03T10:24:40.876Z] Copying: 227/1024 [MB] (21 MBps) [2024-11-03T10:24:41.820Z] Copying: 249/1024 [MB] (21 MBps) [2024-11-03T10:24:42.765Z] Copying: 268/1024 [MB] (18 MBps) [2024-11-03T10:24:43.709Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-03T10:24:44.655Z] Copying: 303/1024 [MB] (25 MBps) [2024-11-03T10:24:45.597Z] Copying: 315/1024 [MB] (11 MBps) [2024-11-03T10:24:46.542Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-03T10:24:47.930Z] Copying: 340/1024 [MB] (14 MBps) [2024-11-03T10:24:48.876Z] Copying: 353/1024 [MB] (12 MBps) [2024-11-03T10:24:49.820Z] Copying: 371/1024 [MB] (18 MBps) [2024-11-03T10:24:50.762Z] Copying: 386/1024 [MB] (14 MBps) [2024-11-03T10:24:51.707Z] Copying: 398/1024 [MB] (12 MBps) [2024-11-03T10:24:52.690Z] Copying: 409/1024 [MB] (10 MBps) [2024-11-03T10:24:53.642Z] Copying: 424/1024 [MB] (14 MBps) [2024-11-03T10:24:54.586Z] Copying: 452/1024 [MB] (28 MBps) [2024-11-03T10:24:55.974Z] Copying: 463/1024 [MB] (10 MBps) [2024-11-03T10:24:56.546Z] Copying: 481/1024 [MB] (18 MBps) [2024-11-03T10:24:57.934Z] Copying: 505/1024 [MB] (24 MBps) [2024-11-03T10:24:58.877Z] Copying: 521/1024 
[MB] (15 MBps) [2024-11-03T10:24:59.819Z] Copying: 536/1024 [MB] (14 MBps) [2024-11-03T10:25:00.762Z] Copying: 556/1024 [MB] (20 MBps) [2024-11-03T10:25:01.707Z] Copying: 571/1024 [MB] (14 MBps) [2024-11-03T10:25:02.652Z] Copying: 582/1024 [MB] (10 MBps) [2024-11-03T10:25:03.594Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-03T10:25:04.981Z] Copying: 603/1024 [MB] (10 MBps) [2024-11-03T10:25:05.553Z] Copying: 620/1024 [MB] (16 MBps) [2024-11-03T10:25:06.941Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-03T10:25:07.885Z] Copying: 641/1024 [MB] (10 MBps) [2024-11-03T10:25:08.840Z] Copying: 657/1024 [MB] (15 MBps) [2024-11-03T10:25:09.825Z] Copying: 670/1024 [MB] (13 MBps) [2024-11-03T10:25:10.770Z] Copying: 681/1024 [MB] (10 MBps) [2024-11-03T10:25:11.721Z] Copying: 692/1024 [MB] (10 MBps) [2024-11-03T10:25:12.665Z] Copying: 702/1024 [MB] (10 MBps) [2024-11-03T10:25:13.608Z] Copying: 716/1024 [MB] (13 MBps) [2024-11-03T10:25:14.554Z] Copying: 729/1024 [MB] (13 MBps) [2024-11-03T10:25:15.942Z] Copying: 743/1024 [MB] (13 MBps) [2024-11-03T10:25:16.886Z] Copying: 757/1024 [MB] (14 MBps) [2024-11-03T10:25:17.829Z] Copying: 775/1024 [MB] (17 MBps) [2024-11-03T10:25:18.771Z] Copying: 796/1024 [MB] (21 MBps) [2024-11-03T10:25:19.713Z] Copying: 814/1024 [MB] (18 MBps) [2024-11-03T10:25:20.658Z] Copying: 832/1024 [MB] (18 MBps) [2024-11-03T10:25:21.601Z] Copying: 850/1024 [MB] (18 MBps) [2024-11-03T10:25:22.547Z] Copying: 863/1024 [MB] (12 MBps) [2024-11-03T10:25:23.935Z] Copying: 873/1024 [MB] (10 MBps) [2024-11-03T10:25:24.878Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-03T10:25:25.822Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-03T10:25:26.778Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-03T10:25:27.742Z] Copying: 921/1024 [MB] (14 MBps) [2024-11-03T10:25:28.685Z] Copying: 936/1024 [MB] (14 MBps) [2024-11-03T10:25:29.623Z] Copying: 955/1024 [MB] (19 MBps) [2024-11-03T10:25:30.560Z] Copying: 972/1024 [MB] (17 MBps) [2024-11-03T10:25:31.944Z] Copying: 990/1024 [MB] (18 MBps) [2024-11-03T10:25:32.885Z] Copying: 1004/1024 [MB] (14 MBps) [2024-11-03T10:25:33.457Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-03T10:25:33.717Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-03 10:25:33.681533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.355 [2024-11-03 10:25:33.682062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:05.355 [2024-11-03 10:25:33.682312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:05.355 [2024-11-03 10:25:33.682378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.355 [2024-11-03 10:25:33.682555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:05.355 [2024-11-03 10:25:33.683945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.355 [2024-11-03 10:25:33.684199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:05.355 [2024-11-03 10:25:33.684380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:31:05.355 [2024-11-03 10:25:33.684406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.355 [2024-11-03 10:25:33.684920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.355 [2024-11-03 10:25:33.684967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:05.355 [2024-11-03 10:25:33.684991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.462 ms 00:31:05.355 [2024-11-03 10:25:33.685010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.355 [2024-11-03 10:25:33.685084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.355 [2024-11-03 10:25:33.685108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:05.355 [2024-11-03 10:25:33.685137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:05.355 [2024-11-03 10:25:33.685463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.355 [2024-11-03 10:25:33.685585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.355 [2024-11-03 10:25:33.685606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:05.355 [2024-11-03 10:25:33.685627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:05.355 [2024-11-03 10:25:33.685645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.355 [2024-11-03 10:25:33.685676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:05.355 [2024-11-03 10:25:33.685714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.685992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 
261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686971] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.686990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 10:25:33.687434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:05.356 [2024-11-03 
10:25:33.687451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:05.357 [2024-11-03 10:25:33.687670] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:05.357 [2024-11-03 10:25:33.687688] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6 00:31:05.357 [2024-11-03 10:25:33.687714] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:05.357 [2024-11-03 10:25:33.687732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:05.357 [2024-11-03 10:25:33.687749] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:05.357 [2024-11-03 10:25:33.687769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:05.357 [2024-11-03 10:25:33.687786] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:05.357 [2024-11-03 10:25:33.687811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:05.357 [2024-11-03 10:25:33.687828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:05.357 [2024-11-03 10:25:33.687846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:05.357 [2024-11-03 10:25:33.687861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:05.357 [2024-11-03 10:25:33.687878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.357 [2024-11-03 10:25:33.687909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:05.357 [2024-11-03 10:25:33.687927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:31:05.357 [2024-11-03 10:25:33.687944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.357 [2024-11-03 10:25:33.692133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.357 [2024-11-03 10:25:33.692325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:05.357 [2024-11-03 10:25:33.692396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.157 ms 00:31:05.357 [2024-11-03 10:25:33.692433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.357 [2024-11-03 10:25:33.692614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.357 [2024-11-03 
10:25:33.692644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:05.357 [2024-11-03 10:25:33.692671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:31:05.357 [2024-11-03 10:25:33.693283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.357 [2024-11-03 10:25:33.702589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.357 [2024-11-03 10:25:33.702764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:05.357 [2024-11-03 10:25:33.702830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.357 [2024-11-03 10:25:33.702853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.357 [2024-11-03 10:25:33.702943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.357 [2024-11-03 10:25:33.702970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:05.357 [2024-11-03 10:25:33.702991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.357 [2024-11-03 10:25:33.703082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.357 [2024-11-03 10:25:33.703176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.357 [2024-11-03 10:25:33.703207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:05.357 [2024-11-03 10:25:33.703249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.357 [2024-11-03 10:25:33.703373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.357 [2024-11-03 10:25:33.703401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.357 [2024-11-03 10:25:33.703411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:05.357 [2024-11-03 10:25:33.703429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.357 [2024-11-03 10:25:33.703438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.723296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.723480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:05.619 [2024-11-03 10:25:33.723500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.723510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.739725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.739780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:05.619 [2024-11-03 10:25:33.739794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.739803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.739880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.739891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:05.619 [2024-11-03 10:25:33.739901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.739910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.739951] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.739970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:05.619 [2024-11-03 10:25:33.739980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.739989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.740057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.740078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:05.619 [2024-11-03 10:25:33.740093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.740103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.740130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.740141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:05.619 [2024-11-03 10:25:33.740152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.740161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.740258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.740277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:05.619 [2024-11-03 10:25:33.740291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.740299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.740365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.619 [2024-11-03 10:25:33.740379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:05.619 [2024-11-03 10:25:33.740390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.619 [2024-11-03 10:25:33.740401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.619 [2024-11-03 10:25:33.740577] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.022 ms, result 0 00:31:05.879 00:31:05.879 00:31:05.879 10:25:34 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:08.417 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:08.417 10:25:36 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:08.417 [2024-11-03 10:25:36.295113] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
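For anyone replaying this section by hand: the pass recorded above is the ftl_restore_fast verify/write-back cycle, and it can be reproduced with the three invocations already visible in the log. A minimal sketch, shell only, with every spdk_dd flag taken verbatim from the lines above and the long paths shortened into variables purely for readability:

    # Paths exactly as they appear in this log.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # 1. Dump 262144 blocks from the restored ftl0 bdev into a plain file
    #    (the restore.sh@74 step that produced the "Copying:" progress lines).
    "$SPDK_DD" --ib=ftl0 --of="$TESTFILE" --json="$FTL_JSON" --count=262144

    # 2. Check the dump against the checksum recorded before shutdown
    #    (restore.sh@76; the log prints "testfile: OK" on success).
    md5sum -c "$TESTFILE.md5"

    # 3. Write the file back into ftl0 at block offset 131072 for the next
    #    pass (restore.sh@79, whose startup trace follows below).
    "$SPDK_DD" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON" --seek=131072

Each spdk_dd run starts its own SPDK application (note the distinct spdk_pid file prefixes, 93720 and 94427), which is why the full FTL startup and fast-shutdown traces repeat around every step.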
00:31:08.417 [2024-11-03 10:25:36.295353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94427 ] 00:31:08.417 [2024-11-03 10:25:36.432458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.417 [2024-11-03 10:25:36.477848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.417 [2024-11-03 10:25:36.590269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:08.417 [2024-11-03 10:25:36.590513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:08.417 [2024-11-03 10:25:36.749633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.749858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:08.417 [2024-11-03 10:25:36.749956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:08.417 [2024-11-03 10:25:36.749982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.750074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.750102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:08.417 [2024-11-03 10:25:36.750123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:08.417 [2024-11-03 10:25:36.750240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.750292] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:08.417 [2024-11-03 10:25:36.750597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:08.417 [2024-11-03 10:25:36.750648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.750670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:08.417 [2024-11-03 10:25:36.750765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:31:08.417 [2024-11-03 10:25:36.750789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.751809] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:08.417 [2024-11-03 10:25:36.751919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.752035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:08.417 [2024-11-03 10:25:36.752140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:31:08.417 [2024-11-03 10:25:36.752175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.752290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.752323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:08.417 [2024-11-03 10:25:36.752348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:08.417 [2024-11-03 10:25:36.752368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.752669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:08.417 [2024-11-03 10:25:36.752872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:08.417 [2024-11-03 10:25:36.752917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:31:08.417 [2024-11-03 10:25:36.752936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.753051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.753079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:08.417 [2024-11-03 10:25:36.753100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:31:08.417 [2024-11-03 10:25:36.753118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.753167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.753298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:08.417 [2024-11-03 10:25:36.753325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:08.417 [2024-11-03 10:25:36.753346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.753396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:08.417 [2024-11-03 10:25:36.755927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.756083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:08.417 [2024-11-03 10:25:36.756145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:31:08.417 [2024-11-03 10:25:36.756171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.756284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.756377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:08.417 [2024-11-03 10:25:36.756401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:08.417 [2024-11-03 10:25:36.756464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.756515] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:08.417 [2024-11-03 10:25:36.756727] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:08.417 [2024-11-03 10:25:36.756822] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:08.417 [2024-11-03 10:25:36.756893] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:08.417 [2024-11-03 10:25:36.757075] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:08.417 [2024-11-03 10:25:36.757163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:08.417 [2024-11-03 10:25:36.757221] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:08.417 [2024-11-03 10:25:36.757319] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:08.417 [2024-11-03 10:25:36.757505] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:08.417 [2024-11-03 10:25:36.757564] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:08.417 [2024-11-03 10:25:36.757675] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:08.417 [2024-11-03 10:25:36.757702] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:08.417 [2024-11-03 10:25:36.757721] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:08.417 [2024-11-03 10:25:36.757889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.757931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:08.417 [2024-11-03 10:25:36.757953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:31:08.417 [2024-11-03 10:25:36.757974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.758088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.417 [2024-11-03 10:25:36.758306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:08.417 [2024-11-03 10:25:36.758332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:08.417 [2024-11-03 10:25:36.758359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.417 [2024-11-03 10:25:36.758494] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:08.417 [2024-11-03 10:25:36.758524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:08.418 [2024-11-03 10:25:36.758603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:08.418 [2024-11-03 10:25:36.758632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.418 [2024-11-03 10:25:36.758660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:08.418 [2024-11-03 10:25:36.758680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:08.418 [2024-11-03 10:25:36.758700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:08.418 [2024-11-03 10:25:36.758749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:08.418 [2024-11-03 10:25:36.758770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:08.418 [2024-11-03 10:25:36.758790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:08.418 [2024-11-03 10:25:36.758808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:08.418 [2024-11-03 10:25:36.758826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:08.418 [2024-11-03 10:25:36.758846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:08.418 [2024-11-03 10:25:36.758864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:08.418 [2024-11-03 10:25:36.758916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:08.418 [2024-11-03 10:25:36.758939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.418 [2024-11-03 10:25:36.758959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:08.418 [2024-11-03 10:25:36.758976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:08.418 [2024-11-03 10:25:36.758996] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:08.418 [2024-11-03 10:25:36.759040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.418 [2024-11-03 10:25:36.759108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:08.418 [2024-11-03 10:25:36.759127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.418 [2024-11-03 10:25:36.759201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:08.418 [2024-11-03 10:25:36.759219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.418 [2024-11-03 10:25:36.759296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:08.418 [2024-11-03 10:25:36.759317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.418 [2024-11-03 10:25:36.759354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:08.418 [2024-11-03 10:25:36.759419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:08.418 [2024-11-03 10:25:36.759460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:08.418 [2024-11-03 10:25:36.759485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:08.418 [2024-11-03 10:25:36.759494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:08.418 [2024-11-03 10:25:36.759503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:08.418 [2024-11-03 10:25:36.759510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:08.418 [2024-11-03 10:25:36.759517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:08.418 [2024-11-03 10:25:36.759531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:08.418 [2024-11-03 10:25:36.759539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759546] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:08.418 [2024-11-03 10:25:36.759561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:08.418 [2024-11-03 10:25:36.759571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:08.418 [2024-11-03 10:25:36.759582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.418 [2024-11-03 10:25:36.759592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:08.418 [2024-11-03 10:25:36.759601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:08.418 [2024-11-03 10:25:36.759610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:08.418 
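Two things are easy to miss in the NV cache layout just dumped: it is ordered by region type rather than by offset (nvc_md at 113.88 MiB prints before p2l0 at 81.12 MiB), and the MiB figures are rounded to two decimals. A small illustrative check — values copied from the dump above; nothing here is SPDK code — that the regions tile the cache without overlap, and that the 80 MiB l2p region matches the 20971520 four-byte L2P entries reported during setup:

#!/usr/bin/env python3
# Illustrative check of the NV cache layout dumped above. Offsets/sizes
# are the MiB figures from the dump (rounded there to two decimals), so
# sort by offset and allow +/-0.01 MiB of rounding slack.
regions = [
    ("sb",               0.00,  0.12),
    ("l2p",              0.12, 80.00),
    ("band_md",         80.12,  0.50),
    ("band_md_mirror",  80.62,  0.50),
    ("p2l0",            81.12,  8.00),
    ("p2l1",            89.12,  8.00),
    ("p2l2",            97.12,  8.00),
    ("p2l3",           105.12,  8.00),
    ("trim_md",        113.12,  0.25),
    ("trim_md_mirror", 113.38,  0.25),
    ("trim_log",       113.62,  0.12),
    ("trim_log_mirror",113.75,  0.12),
    ("nvc_md",         113.88,  0.12),
    ("nvc_md_mirror",  114.00,  0.12),
]

# l2p sizing matches the setup figures: 20971520 entries * 4 B = 80 MiB.
assert 20971520 * 4 / 2**20 == 80.0

prev_end = 0.0
for name, off, size in sorted(regions, key=lambda r: r[1]):
    gap = round(off - prev_end, 2)   # +/-0.01 MiB is rounding, not a hole
    if abs(gap) > 0.01:
        print(f"hole/overlap of {gap:+.2f} MiB before {name}")
    prev_end = off + size
print(f"regions tile 0 .. {prev_end:.2f} MiB of the 5171 MiB cache device")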
[2024-11-03 10:25:36.759618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:08.418 [2024-11-03 10:25:36.759631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:08.418 [2024-11-03 10:25:36.759639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:08.418 [2024-11-03 10:25:36.759648] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:08.418 [2024-11-03 10:25:36.759662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.418 [2024-11-03 10:25:36.759671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:08.418 [2024-11-03 10:25:36.759679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:08.418 [2024-11-03 10:25:36.759686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:08.418 [2024-11-03 10:25:36.759694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:08.418 [2024-11-03 10:25:36.759702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:08.418 [2024-11-03 10:25:36.759710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:08.418 [2024-11-03 10:25:36.759717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:08.418 [2024-11-03 10:25:36.759725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:08.418 [2024-11-03 10:25:36.759732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:08.418 [2024-11-03 10:25:36.759740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:08.418 [2024-11-03 10:25:36.759748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:08.418 [2024-11-03 10:25:36.759762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:08.418 [2024-11-03 10:25:36.759772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:08.418 [2024-11-03 10:25:36.759779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:08.418 [2024-11-03 10:25:36.759788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:08.418 [2024-11-03 10:25:36.759797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.418 [2024-11-03 10:25:36.759805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:08.418 [2024-11-03 10:25:36.759813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:08.418 [2024-11-03 10:25:36.759820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:08.418 [2024-11-03 10:25:36.759827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:08.418 [2024-11-03 10:25:36.759835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.418 [2024-11-03 10:25:36.759845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:08.418 [2024-11-03 10:25:36.759854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:31:08.418 [2024-11-03 10:25:36.759863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.788271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.788373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:08.680 [2024-11-03 10:25:36.788414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.348 ms 00:31:08.680 [2024-11-03 10:25:36.788437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.788690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.788728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:08.680 [2024-11-03 10:25:36.788761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:31:08.680 [2024-11-03 10:25:36.788781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.803879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.803927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:08.680 [2024-11-03 10:25:36.803943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.948 ms 00:31:08.680 [2024-11-03 10:25:36.803951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.803988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.803998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:08.680 [2024-11-03 10:25:36.804007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:08.680 [2024-11-03 10:25:36.804016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.804117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.804129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:08.680 [2024-11-03 10:25:36.804139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:08.680 [2024-11-03 10:25:36.804151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.804340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.804356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:08.680 [2024-11-03 10:25:36.804365] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:31:08.680 [2024-11-03 10:25:36.804375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.812946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.812996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:08.680 [2024-11-03 10:25:36.813007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.550 ms 00:31:08.680 [2024-11-03 10:25:36.813016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.813143] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:08.680 [2024-11-03 10:25:36.813157] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:08.680 [2024-11-03 10:25:36.813168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.813177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:08.680 [2024-11-03 10:25:36.813186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:08.680 [2024-11-03 10:25:36.813199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.825525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.825569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:08.680 [2024-11-03 10:25:36.825581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.306 ms 00:31:08.680 [2024-11-03 10:25:36.825589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.825731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.825742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:08.680 [2024-11-03 10:25:36.825752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:08.680 [2024-11-03 10:25:36.825759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.825810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.825824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:08.680 [2024-11-03 10:25:36.825833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:08.680 [2024-11-03 10:25:36.825844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.826155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.826167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:08.680 [2024-11-03 10:25:36.826176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:31:08.680 [2024-11-03 10:25:36.826184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.826201] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:08.680 [2024-11-03 10:25:36.826211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.826219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:08.680 [2024-11-03 10:25:36.826267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:08.680 [2024-11-03 10:25:36.826279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.836498] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:08.680 [2024-11-03 10:25:36.836650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.836662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:08.680 [2024-11-03 10:25:36.836674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.351 ms 00:31:08.680 [2024-11-03 10:25:36.836682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.680 [2024-11-03 10:25:36.839069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.680 [2024-11-03 10:25:36.839105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:08.680 [2024-11-03 10:25:36.839120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.357 ms 00:31:08.680 [2024-11-03 10:25:36.839132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.681 [2024-11-03 10:25:36.839259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.681 [2024-11-03 10:25:36.839278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:08.681 [2024-11-03 10:25:36.839288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:08.681 [2024-11-03 10:25:36.839296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.681 [2024-11-03 10:25:36.839325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.681 [2024-11-03 10:25:36.839337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:08.681 [2024-11-03 10:25:36.839346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:08.681 [2024-11-03 10:25:36.839357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.681 [2024-11-03 10:25:36.839395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:08.681 [2024-11-03 10:25:36.839406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.681 [2024-11-03 10:25:36.839417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:08.681 [2024-11-03 10:25:36.839425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:08.681 [2024-11-03 10:25:36.839433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.681 [2024-11-03 10:25:36.846219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.681 [2024-11-03 10:25:36.846284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:08.681 [2024-11-03 10:25:36.846296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.765 ms 00:31:08.681 [2024-11-03 10:25:36.846305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.681 [2024-11-03 10:25:36.846400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.681 [2024-11-03 10:25:36.846412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:08.681 [2024-11-03 10:25:36.846422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:31:08.681 [2024-11-03 10:25:36.846435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.681 [2024-11-03 10:25:36.847901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.813 ms, result 0 00:31:09.625  [2024-11-03T10:25:38.931Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-03T10:25:39.871Z] Copying: 34/1024 [MB] (17 MBps) [2024-11-03T10:25:41.251Z] Copying: 55/1024 [MB] (20 MBps) [2024-11-03T10:25:42.184Z] Copying: 70/1024 [MB] (15 MBps) [2024-11-03T10:25:43.118Z] Copying: 92/1024 [MB] (21 MBps) [2024-11-03T10:25:44.078Z] Copying: 108/1024 [MB] (15 MBps) [2024-11-03T10:25:45.061Z] Copying: 126/1024 [MB] (18 MBps) [2024-11-03T10:25:45.995Z] Copying: 143/1024 [MB] (17 MBps) [2024-11-03T10:25:46.930Z] Copying: 159/1024 [MB] (16 MBps) [2024-11-03T10:25:47.865Z] Copying: 174/1024 [MB] (14 MBps) [2024-11-03T10:25:49.241Z] Copying: 190/1024 [MB] (15 MBps) [2024-11-03T10:25:50.197Z] Copying: 205/1024 [MB] (15 MBps) [2024-11-03T10:25:51.131Z] Copying: 218/1024 [MB] (13 MBps) [2024-11-03T10:25:52.066Z] Copying: 231/1024 [MB] (12 MBps) [2024-11-03T10:25:53.001Z] Copying: 245/1024 [MB] (14 MBps) [2024-11-03T10:25:53.943Z] Copying: 260/1024 [MB] (14 MBps) [2024-11-03T10:25:54.877Z] Copying: 271/1024 [MB] (11 MBps) [2024-11-03T10:25:56.252Z] Copying: 287/1024 [MB] (15 MBps) [2024-11-03T10:25:57.186Z] Copying: 301/1024 [MB] (14 MBps) [2024-11-03T10:25:58.121Z] Copying: 316/1024 [MB] (14 MBps) [2024-11-03T10:25:59.056Z] Copying: 331/1024 [MB] (14 MBps) [2024-11-03T10:25:59.990Z] Copying: 346/1024 [MB] (14 MBps) [2024-11-03T10:26:00.924Z] Copying: 360/1024 [MB] (14 MBps) [2024-11-03T10:26:01.901Z] Copying: 374/1024 [MB] (13 MBps) [2024-11-03T10:26:03.282Z] Copying: 389/1024 [MB] (14 MBps) [2024-11-03T10:26:04.222Z] Copying: 402/1024 [MB] (13 MBps) [2024-11-03T10:26:05.158Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-03T10:26:06.093Z] Copying: 426/1024 [MB] (14 MBps) [2024-11-03T10:26:07.035Z] Copying: 440/1024 [MB] (13 MBps) [2024-11-03T10:26:07.970Z] Copying: 451/1024 [MB] (11 MBps) [2024-11-03T10:26:08.906Z] Copying: 466/1024 [MB] (14 MBps) [2024-11-03T10:26:10.297Z] Copying: 480/1024 [MB] (14 MBps) [2024-11-03T10:26:10.869Z] Copying: 495/1024 [MB] (14 MBps) [2024-11-03T10:26:12.246Z] Copying: 506/1024 [MB] (11 MBps) [2024-11-03T10:26:13.182Z] Copying: 520/1024 [MB] (13 MBps) [2024-11-03T10:26:14.119Z] Copying: 534/1024 [MB] (14 MBps) [2024-11-03T10:26:15.055Z] Copying: 548/1024 [MB] (14 MBps) [2024-11-03T10:26:15.993Z] Copying: 562/1024 [MB] (13 MBps) [2024-11-03T10:26:16.936Z] Copying: 576/1024 [MB] (13 MBps) [2024-11-03T10:26:17.872Z] Copying: 586/1024 [MB] (10 MBps) [2024-11-03T10:26:19.288Z] Copying: 600/1024 [MB] (13 MBps) [2024-11-03T10:26:20.222Z] Copying: 613/1024 [MB] (13 MBps) [2024-11-03T10:26:21.158Z] Copying: 627/1024 [MB] (13 MBps) [2024-11-03T10:26:22.093Z] Copying: 640/1024 [MB] (12 MBps) [2024-11-03T10:26:23.031Z] Copying: 653/1024 [MB] (13 MBps) [2024-11-03T10:26:23.973Z] Copying: 667/1024 [MB] (13 MBps) [2024-11-03T10:26:24.913Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-03T10:26:26.289Z] Copying: 689/1024 [MB] (11 MBps) [2024-11-03T10:26:27.231Z] Copying: 703/1024 [MB] (14 MBps) [2024-11-03T10:26:28.175Z] Copying: 716/1024 [MB] (13 MBps) [2024-11-03T10:26:29.112Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-03T10:26:30.057Z] Copying: 739/1024 [MB] (12 MBps) [2024-11-03T10:26:30.997Z] Copying: 749/1024 [MB] (10 MBps) [2024-11-03T10:26:31.933Z] Copying: 760/1024 [MB] 
(11 MBps) [2024-11-03T10:26:32.869Z] Copying: 773/1024 [MB] (13 MBps) [2024-11-03T10:26:34.255Z] Copying: 786/1024 [MB] (12 MBps) [2024-11-03T10:26:35.191Z] Copying: 796/1024 [MB] (10 MBps) [2024-11-03T10:26:36.152Z] Copying: 808/1024 [MB] (12 MBps) [2024-11-03T10:26:37.109Z] Copying: 820/1024 [MB] (11 MBps) [2024-11-03T10:26:38.052Z] Copying: 831/1024 [MB] (11 MBps) [2024-11-03T10:26:38.994Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-03T10:26:39.938Z] Copying: 853/1024 [MB] (11 MBps) [2024-11-03T10:26:40.879Z] Copying: 869/1024 [MB] (16 MBps) [2024-11-03T10:26:42.265Z] Copying: 886/1024 [MB] (16 MBps) [2024-11-03T10:26:43.210Z] Copying: 903/1024 [MB] (17 MBps) [2024-11-03T10:26:44.156Z] Copying: 933/1024 [MB] (29 MBps) [2024-11-03T10:26:45.100Z] Copying: 961/1024 [MB] (28 MBps) [2024-11-03T10:26:46.044Z] Copying: 975/1024 [MB] (14 MBps) [2024-11-03T10:26:46.989Z] Copying: 993/1024 [MB] (17 MBps) [2024-11-03T10:26:47.563Z] Copying: 1023/1024 [MB] (30 MBps) [2024-11-03T10:26:47.563Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 10:26:47.498327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.201 [2024-11-03 10:26:47.498390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:19.201 [2024-11-03 10:26:47.498402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:19.201 [2024-11-03 10:26:47.498409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.201 [2024-11-03 10:26:47.501290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:19.201 [2024-11-03 10:26:47.502678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.201 [2024-11-03 10:26:47.502710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:19.201 [2024-11-03 10:26:47.502718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:32:19.201 [2024-11-03 10:26:47.502725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.201 [2024-11-03 10:26:47.509948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.201 [2024-11-03 10:26:47.509973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:19.201 [2024-11-03 10:26:47.509985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.595 ms 00:32:19.201 [2024-11-03 10:26:47.509991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.201 [2024-11-03 10:26:47.510016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.201 [2024-11-03 10:26:47.510023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:19.201 [2024-11-03 10:26:47.510034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.201 [2024-11-03 10:26:47.510040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.201 [2024-11-03 10:26:47.510079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.201 [2024-11-03 10:26:47.510086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:19.201 [2024-11-03 10:26:47.510093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:19.201 [2024-11-03 10:26:47.510100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.201 [2024-11-03 10:26:47.510111] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:32:19.201 [2024-11-03 10:26:47.510122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:32:19.201 [2024-11-03 10:26:47.510130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 
261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:19.201 [2024-11-03 10:26:47.510318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510580] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 
10:26:47.510732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:19.202 [2024-11-03 10:26:47.510744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:19.202 [2024-11-03 10:26:47.510755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6 00:32:19.202 [2024-11-03 10:26:47.510763] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:32:19.202 [2024-11-03 10:26:47.510769] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129312 00:32:19.202 [2024-11-03 10:26:47.510775] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:32:19.202 [2024-11-03 10:26:47.510781] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:19.202 [2024-11-03 10:26:47.510788] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:19.202 [2024-11-03 10:26:47.510795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:19.202 [2024-11-03 10:26:47.510802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:19.202 [2024-11-03 10:26:47.510807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:19.202 [2024-11-03 10:26:47.510812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:19.203 [2024-11-03 10:26:47.510818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.203 [2024-11-03 10:26:47.510824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:19.203 [2024-11-03 10:26:47.510830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:32:19.203 [2024-11-03 10:26:47.510836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.512102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.203 [2024-11-03 10:26:47.512119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:19.203 [2024-11-03 10:26:47.512126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:32:19.203 [2024-11-03 10:26:47.512132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.512200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.203 [2024-11-03 10:26:47.512210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:19.203 [2024-11-03 10:26:47.512242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:19.203 [2024-11-03 10:26:47.512248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.515931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.515957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:19.203 [2024-11-03 10:26:47.515967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.515973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.516012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.516018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:19.203 [2024-11-03 10:26:47.516024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
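The statistics dump above is internally consistent: user writes equal both the "total valid LBAs" figure and the valid count shown for Band 1 (129280), and the reported WAF is just the ratio of the two write counters. A short illustrative check of the arithmetic:

# Illustrative: reproduce the WAF figure from the stats dump above.
total_writes = 129312   # "total writes"
user_writes  = 129280   # "user writes" = "total valid LBAs" = Band 1 valid
print(f"WAF = {total_writes / user_writes:.4f}")                # WAF = 1.0002
print(f"non-user writes: {total_writes - user_writes} blocks")  # 32

The 32 extra blocks are the FTL's own (presumably metadata) writes, which is why the WAF sits just above 1.0.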
00:32:19.203 [2024-11-03 10:26:47.516033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.516055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.516062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:19.203 [2024-11-03 10:26:47.516068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.516074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.516087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.516094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:19.203 [2024-11-03 10:26:47.516099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.516105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.523441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.523472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:19.203 [2024-11-03 10:26:47.523480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.523489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.529857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.529886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.203 [2024-11-03 10:26:47.529894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.529900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.529930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.529937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:19.203 [2024-11-03 10:26:47.529944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.529950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.529974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.529980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:19.203 [2024-11-03 10:26:47.529990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.529996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.530034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.530044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:19.203 [2024-11-03 10:26:47.530050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.530056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.530073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.530081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:19.203 [2024-11-03 10:26:47.530087] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.530093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.530119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.530126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:19.203 [2024-11-03 10:26:47.530132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.530138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.530170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.203 [2024-11-03 10:26:47.530178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:19.203 [2024-11-03 10:26:47.530185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.203 [2024-11-03 10:26:47.530191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.203 [2024-11-03 10:26:47.530484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 33.393 ms, result 0 00:32:20.590 00:32:20.590 00:32:20.590 10:26:48 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:20.590 [2024-11-03 10:26:48.637973] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:20.590 [2024-11-03 10:26:48.638092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95157 ] 00:32:20.590 [2024-11-03 10:26:48.775203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.590 [2024-11-03 10:26:48.823721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.590 [2024-11-03 10:26:48.937762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:20.590 [2024-11-03 10:26:48.937849] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:20.854 [2024-11-03 10:26:49.098409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.098475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:20.854 [2024-11-03 10:26:49.098492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:20.854 [2024-11-03 10:26:49.098501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.098558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.098570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:20.854 [2024-11-03 10:26:49.098582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:20.854 [2024-11-03 10:26:49.098596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.098618] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:20.854 [2024-11-03 10:26:49.098892] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:20.854 [2024-11-03 10:26:49.098909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.098917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:20.854 [2024-11-03 10:26:49.098927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:32:20.854 [2024-11-03 10:26:49.098935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.099219] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:20.854 [2024-11-03 10:26:49.099272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.099286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:20.854 [2024-11-03 10:26:49.099296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:20.854 [2024-11-03 10:26:49.099305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.099359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.099372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:20.854 [2024-11-03 10:26:49.099384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:20.854 [2024-11-03 10:26:49.099391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.099683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.099698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:20.854 [2024-11-03 10:26:49.099707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:32:20.854 [2024-11-03 10:26:49.099722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.099803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.099816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:20.854 [2024-11-03 10:26:49.099824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:20.854 [2024-11-03 10:26:49.099835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.099857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.099866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:20.854 [2024-11-03 10:26:49.099875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:20.854 [2024-11-03 10:26:49.099883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.099908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:20.854 [2024-11-03 10:26:49.102006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.102093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:20.854 [2024-11-03 10:26:49.102107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:32:20.854 [2024-11-03 10:26:49.102114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 
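The spdk_dd arguments on the invocation above convert cleanly to the sizes the progress loop reports, assuming the ftl0 input bdev's 4 KiB block size (--skip and --count are in input-device blocks); the throughput figure from the pass that just completed also checks out. Illustrative arithmetic:

# Illustrative size/throughput arithmetic for the restore passes,
# assuming 4 KiB blocks on the ftl0 input bdev.
BLOCK = 4096
count, skip = 262144, 131072                   # from the command line above
print(count * BLOCK // 2**20, "MiB to copy")   # 1024 -> "1024/1024 [MB]"
print(skip  * BLOCK // 2**20, "MiB skipped")   # 512 MiB into ftl0

# The earlier pass moved its 1024 MiB between ~10:25:37 and ~10:26:47
# (~70 s), consistent with its reported "average 14 MBps".
print(round(1024 / 70, 1), "MBps")             # 14.6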
[2024-11-03 10:26:49.102149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.102169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:20.854 [2024-11-03 10:26:49.102180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:20.854 [2024-11-03 10:26:49.102187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.102222] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:20.854 [2024-11-03 10:26:49.102269] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:20.854 [2024-11-03 10:26:49.102316] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:20.854 [2024-11-03 10:26:49.102332] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:20.854 [2024-11-03 10:26:49.102439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:20.854 [2024-11-03 10:26:49.102454] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:20.854 [2024-11-03 10:26:49.102464] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:20.854 [2024-11-03 10:26:49.102475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:20.854 [2024-11-03 10:26:49.102485] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:20.854 [2024-11-03 10:26:49.102493] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:20.854 [2024-11-03 10:26:49.102503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:20.854 [2024-11-03 10:26:49.102511] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:20.854 [2024-11-03 10:26:49.102521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:20.854 [2024-11-03 10:26:49.102529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.102537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:20.854 [2024-11-03 10:26:49.102544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:32:20.854 [2024-11-03 10:26:49.102552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.102650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.854 [2024-11-03 10:26:49.102659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:20.854 [2024-11-03 10:26:49.102672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:20.854 [2024-11-03 10:26:49.102683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.854 [2024-11-03 10:26:49.102787] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:20.854 [2024-11-03 10:26:49.102803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:20.854 [2024-11-03 10:26:49.102812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:20.854 [2024-11-03 10:26:49.102821] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.854 [2024-11-03 10:26:49.102830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:20.854 [2024-11-03 10:26:49.102840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:20.854 [2024-11-03 10:26:49.102851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:20.854 [2024-11-03 10:26:49.102859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:20.854 [2024-11-03 10:26:49.102867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:20.854 [2024-11-03 10:26:49.102875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:20.854 [2024-11-03 10:26:49.102883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:20.854 [2024-11-03 10:26:49.102894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:20.854 [2024-11-03 10:26:49.102902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:20.854 [2024-11-03 10:26:49.102911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:20.854 [2024-11-03 10:26:49.102919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:20.854 [2024-11-03 10:26:49.102927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.854 [2024-11-03 10:26:49.102935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:20.854 [2024-11-03 10:26:49.102943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:20.854 [2024-11-03 10:26:49.102950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.854 [2024-11-03 10:26:49.102959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:20.854 [2024-11-03 10:26:49.102967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:20.854 [2024-11-03 10:26:49.102975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.854 [2024-11-03 10:26:49.102986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:20.854 [2024-11-03 10:26:49.102995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:20.854 [2024-11-03 10:26:49.103003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.854 [2024-11-03 10:26:49.103011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:20.854 [2024-11-03 10:26:49.103019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:20.854 [2024-11-03 10:26:49.103026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.854 [2024-11-03 10:26:49.103034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:20.854 [2024-11-03 10:26:49.103041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:20.854 [2024-11-03 10:26:49.103049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.854 [2024-11-03 10:26:49.103057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:20.854 [2024-11-03 10:26:49.103065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:20.854 [2024-11-03 10:26:49.103073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:20.855 [2024-11-03 10:26:49.103081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:20.855 [2024-11-03 10:26:49.103088] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:20.855 [2024-11-03 10:26:49.103096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:20.855 [2024-11-03 10:26:49.103102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:20.855 [2024-11-03 10:26:49.103111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:20.855 [2024-11-03 10:26:49.103118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.855 [2024-11-03 10:26:49.103125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:20.855 [2024-11-03 10:26:49.103131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:20.855 [2024-11-03 10:26:49.103137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.855 [2024-11-03 10:26:49.103145] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:20.855 [2024-11-03 10:26:49.103156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:20.855 [2024-11-03 10:26:49.103165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:20.855 [2024-11-03 10:26:49.103173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.855 [2024-11-03 10:26:49.103181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:20.855 [2024-11-03 10:26:49.103188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:20.855 [2024-11-03 10:26:49.103195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:20.855 [2024-11-03 10:26:49.103202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:20.855 [2024-11-03 10:26:49.103208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:20.855 [2024-11-03 10:26:49.103215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:20.855 [2024-11-03 10:26:49.103245] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:20.855 [2024-11-03 10:26:49.103260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:20.855 [2024-11-03 10:26:49.103269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:20.855 [2024-11-03 10:26:49.103276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:20.855 [2024-11-03 10:26:49.103284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:20.855 [2024-11-03 10:26:49.103291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:20.855 [2024-11-03 10:26:49.103298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:20.855 [2024-11-03 10:26:49.103305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:20.855 [2024-11-03 10:26:49.103313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:20.855 [2024-11-03 
10:26:49.103320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:20.855 [2024-11-03 10:26:49.103327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:20.855 [2024-11-03 10:26:49.103334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:20.855 [2024-11-03 10:26:49.103342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:20.855 [2024-11-03 10:26:49.103355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:20.855 [2024-11-03 10:26:49.103362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:20.855 [2024-11-03 10:26:49.103370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:20.855 [2024-11-03 10:26:49.103378] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:20.855 [2024-11-03 10:26:49.103388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:20.855 [2024-11-03 10:26:49.103397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:20.855 [2024-11-03 10:26:49.103404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:20.855 [2024-11-03 10:26:49.103411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:20.855 [2024-11-03 10:26:49.103419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:20.855 [2024-11-03 10:26:49.103428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.103435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:20.855 [2024-11-03 10:26:49.103446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:32:20.855 [2024-11-03 10:26:49.103454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.120379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.120579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:20.855 [2024-11-03 10:26:49.120605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.879 ms 00:32:20.855 [2024-11-03 10:26:49.120615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.120712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.120721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:20.855 [2024-11-03 10:26:49.120730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:20.855 [2024-11-03 10:26:49.120738] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.132171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.132376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:20.855 [2024-11-03 10:26:49.132400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.369 ms 00:32:20.855 [2024-11-03 10:26:49.132414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.132452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.132460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:20.855 [2024-11-03 10:26:49.132470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:20.855 [2024-11-03 10:26:49.132477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.132575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.132586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:20.855 [2024-11-03 10:26:49.132596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:20.855 [2024-11-03 10:26:49.132612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.132736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.132745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:20.855 [2024-11-03 10:26:49.132757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:32:20.855 [2024-11-03 10:26:49.132765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.139462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.139607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:20.855 [2024-11-03 10:26:49.139629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.675 ms 00:32:20.855 [2024-11-03 10:26:49.139637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.139753] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:20.855 [2024-11-03 10:26:49.139766] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:20.855 [2024-11-03 10:26:49.139776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.139785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:20.855 [2024-11-03 10:26:49.139798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:20.855 [2024-11-03 10:26:49.139806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.152102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.152146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:20.855 [2024-11-03 10:26:49.152157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.280 ms 00:32:20.855 [2024-11-03 10:26:49.152164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 
10:26:49.152336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.152348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:20.855 [2024-11-03 10:26:49.152360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:32:20.855 [2024-11-03 10:26:49.152368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.152418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.152430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:20.855 [2024-11-03 10:26:49.152438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:20.855 [2024-11-03 10:26:49.152448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.152751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.152762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:20.855 [2024-11-03 10:26:49.152771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:32:20.855 [2024-11-03 10:26:49.152778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.152792] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:20.855 [2024-11-03 10:26:49.152801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.855 [2024-11-03 10:26:49.152809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:20.855 [2024-11-03 10:26:49.152822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:20.855 [2024-11-03 10:26:49.152838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.855 [2024-11-03 10:26:49.162117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:20.855 [2024-11-03 10:26:49.162301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.856 [2024-11-03 10:26:49.162313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:20.856 [2024-11-03 10:26:49.162328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.445 ms 00:32:20.856 [2024-11-03 10:26:49.162336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.856 [2024-11-03 10:26:49.164922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.856 [2024-11-03 10:26:49.164958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:20.856 [2024-11-03 10:26:49.164968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:32:20.856 [2024-11-03 10:26:49.164975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.856 [2024-11-03 10:26:49.165052] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:20.856 [2024-11-03 10:26:49.165661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.856 [2024-11-03 10:26:49.165684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:20.856 [2024-11-03 10:26:49.165694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:32:20.856 [2024-11-03 10:26:49.165702] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:20.856 [2024-11-03 10:26:49.165738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.856 [2024-11-03 10:26:49.165753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:20.856 [2024-11-03 10:26:49.165761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:20.856 [2024-11-03 10:26:49.165768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.856 [2024-11-03 10:26:49.165801] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:20.856 [2024-11-03 10:26:49.165814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.856 [2024-11-03 10:26:49.165822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:20.856 [2024-11-03 10:26:49.165831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:20.856 [2024-11-03 10:26:49.165838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.856 [2024-11-03 10:26:49.171971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.856 [2024-11-03 10:26:49.172025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:20.856 [2024-11-03 10:26:49.172035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:32:20.856 [2024-11-03 10:26:49.172043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.856 [2024-11-03 10:26:49.172138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.856 [2024-11-03 10:26:49.172149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:20.856 [2024-11-03 10:26:49.172158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:20.856 [2024-11-03 10:26:49.172165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.856 [2024-11-03 10:26:49.173615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 74.533 ms, result 0 00:32:22.264  [2024-11-03T10:26:51.572Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-03T10:26:52.517Z] Copying: 33/1024 [MB] (19 MBps) [2024-11-03T10:26:53.495Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-03T10:26:54.439Z] Copying: 66/1024 [MB] (16 MBps) [2024-11-03T10:26:55.383Z] Copying: 85/1024 [MB] (19 MBps) [2024-11-03T10:26:56.772Z] Copying: 103/1024 [MB] (18 MBps) [2024-11-03T10:26:57.717Z] Copying: 116/1024 [MB] (12 MBps) [2024-11-03T10:26:58.662Z] Copying: 127/1024 [MB] (10 MBps) [2024-11-03T10:26:59.611Z] Copying: 139/1024 [MB] (11 MBps) [2024-11-03T10:27:00.553Z] Copying: 150/1024 [MB] (11 MBps) [2024-11-03T10:27:01.496Z] Copying: 161/1024 [MB] (11 MBps) [2024-11-03T10:27:02.439Z] Copying: 172/1024 [MB] (10 MBps) [2024-11-03T10:27:03.383Z] Copying: 189/1024 [MB] (16 MBps) [2024-11-03T10:27:04.772Z] Copying: 200/1024 [MB] (11 MBps) [2024-11-03T10:27:05.717Z] Copying: 214/1024 [MB] (13 MBps) [2024-11-03T10:27:06.662Z] Copying: 227/1024 [MB] (13 MBps) [2024-11-03T10:27:07.607Z] Copying: 246/1024 [MB] (18 MBps) [2024-11-03T10:27:08.549Z] Copying: 263/1024 [MB] (17 MBps) [2024-11-03T10:27:09.495Z] Copying: 279/1024 [MB] (16 MBps) [2024-11-03T10:27:10.443Z] Copying: 301/1024 [MB] (21 MBps) [2024-11-03T10:27:11.449Z] Copying: 320/1024 [MB] (19 MBps) [2024-11-03T10:27:12.395Z] Copying: 336/1024 [MB] (15 MBps) [2024-11-03T10:27:13.781Z] Copying: 352/1024 [MB] (15 MBps) 
[2024-11-03T10:27:14.725Z] Copying: 367/1024 [MB] (15 MBps) [2024-11-03T10:27:15.670Z] Copying: 384/1024 [MB] (16 MBps) [2024-11-03T10:27:16.615Z] Copying: 405/1024 [MB] (20 MBps) [2024-11-03T10:27:17.558Z] Copying: 421/1024 [MB] (16 MBps) [2024-11-03T10:27:18.503Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-03T10:27:19.453Z] Copying: 447/1024 [MB] (14 MBps) [2024-11-03T10:27:20.396Z] Copying: 458/1024 [MB] (11 MBps) [2024-11-03T10:27:21.783Z] Copying: 473/1024 [MB] (14 MBps) [2024-11-03T10:27:22.728Z] Copying: 486/1024 [MB] (12 MBps) [2024-11-03T10:27:23.670Z] Copying: 503/1024 [MB] (16 MBps) [2024-11-03T10:27:24.614Z] Copying: 513/1024 [MB] (10 MBps) [2024-11-03T10:27:25.557Z] Copying: 533/1024 [MB] (19 MBps) [2024-11-03T10:27:26.501Z] Copying: 550/1024 [MB] (17 MBps) [2024-11-03T10:27:27.479Z] Copying: 567/1024 [MB] (16 MBps) [2024-11-03T10:27:28.423Z] Copying: 586/1024 [MB] (19 MBps) [2024-11-03T10:27:29.366Z] Copying: 607/1024 [MB] (21 MBps) [2024-11-03T10:27:30.753Z] Copying: 625/1024 [MB] (17 MBps) [2024-11-03T10:27:31.698Z] Copying: 640/1024 [MB] (15 MBps) [2024-11-03T10:27:32.645Z] Copying: 659/1024 [MB] (18 MBps) [2024-11-03T10:27:33.590Z] Copying: 679/1024 [MB] (20 MBps) [2024-11-03T10:27:34.536Z] Copying: 693/1024 [MB] (13 MBps) [2024-11-03T10:27:35.481Z] Copying: 707/1024 [MB] (14 MBps) [2024-11-03T10:27:36.424Z] Copying: 722/1024 [MB] (15 MBps) [2024-11-03T10:27:37.368Z] Copying: 733/1024 [MB] (10 MBps) [2024-11-03T10:27:38.756Z] Copying: 747/1024 [MB] (13 MBps) [2024-11-03T10:27:39.702Z] Copying: 760/1024 [MB] (13 MBps) [2024-11-03T10:27:40.645Z] Copying: 773/1024 [MB] (12 MBps) [2024-11-03T10:27:41.589Z] Copying: 795/1024 [MB] (21 MBps) [2024-11-03T10:27:42.533Z] Copying: 815/1024 [MB] (20 MBps) [2024-11-03T10:27:43.481Z] Copying: 834/1024 [MB] (19 MBps) [2024-11-03T10:27:44.444Z] Copying: 854/1024 [MB] (19 MBps) [2024-11-03T10:27:45.423Z] Copying: 872/1024 [MB] (17 MBps) [2024-11-03T10:27:46.366Z] Copying: 889/1024 [MB] (17 MBps) [2024-11-03T10:27:47.752Z] Copying: 903/1024 [MB] (14 MBps) [2024-11-03T10:27:48.696Z] Copying: 927/1024 [MB] (24 MBps) [2024-11-03T10:27:49.639Z] Copying: 941/1024 [MB] (13 MBps) [2024-11-03T10:27:50.579Z] Copying: 965/1024 [MB] (24 MBps) [2024-11-03T10:27:51.525Z] Copying: 983/1024 [MB] (17 MBps) [2024-11-03T10:27:52.471Z] Copying: 1003/1024 [MB] (20 MBps) [2024-11-03T10:27:52.471Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-03 10:27:52.265873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.109 [2024-11-03 10:27:52.266298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:24.109 [2024-11-03 10:27:52.266524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:24.109 [2024-11-03 10:27:52.266587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.109 [2024-11-03 10:27:52.266671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:24.109 [2024-11-03 10:27:52.267828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.109 [2024-11-03 10:27:52.268017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:24.109 [2024-11-03 10:27:52.268135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:33:24.109 [2024-11-03 10:27:52.268319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.109 [2024-11-03 10:27:52.268883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:24.109 [2024-11-03 10:27:52.269051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:24.109 [2024-11-03 10:27:52.269148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:33:24.109 [2024-11-03 10:27:52.269197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.109 [2024-11-03 10:27:52.269433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.109 [2024-11-03 10:27:52.269492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:24.109 [2024-11-03 10:27:52.269613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:24.109 [2024-11-03 10:27:52.269663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.109 [2024-11-03 10:27:52.269782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.109 [2024-11-03 10:27:52.269840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:24.109 [2024-11-03 10:27:52.269944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:24.109 [2024-11-03 10:27:52.271484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.109 [2024-11-03 10:27:52.271559] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:24.109 [2024-11-03 10:27:52.271705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:24.109 [2024-11-03 10:27:52.271802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.271946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.271976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 
10:27:52.272462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:33:24.109 [2024-11-03 10:27:52.272843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:24.109 [2024-11-03 10:27:52.272874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.272998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:24.110 [2024-11-03 10:27:52.273935] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:24.110 [2024-11-03 10:27:52.273950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6 00:33:24.110 [2024-11-03 10:27:52.273958] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:24.110 [2024-11-03 10:27:52.273965] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1824 00:33:24.110 [2024-11-03 10:27:52.273977] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1792 00:33:24.110 [2024-11-03 10:27:52.273985] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0179 00:33:24.110 [2024-11-03 10:27:52.273993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:24.110 [2024-11-03 10:27:52.274001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:24.110 [2024-11-03 10:27:52.274017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:24.110 [2024-11-03 10:27:52.274024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:24.110 [2024-11-03 10:27:52.274030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:24.110 [2024-11-03 10:27:52.274040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.110 [2024-11-03 10:27:52.274424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:24.110 [2024-11-03 10:27:52.274435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.483 ms 00:33:24.110 [2024-11-03 10:27:52.274442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.110 [2024-11-03 10:27:52.276480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.110 [2024-11-03 10:27:52.276507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:24.110 [2024-11-03 10:27:52.276518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:33:24.110 
[2024-11-03 10:27:52.276531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.110 [2024-11-03 10:27:52.276629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.110 [2024-11-03 10:27:52.276638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:24.110 [2024-11-03 10:27:52.276648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:33:24.110 [2024-11-03 10:27:52.276656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.110 [2024-11-03 10:27:52.282503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.110 [2024-11-03 10:27:52.282530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:24.110 [2024-11-03 10:27:52.282543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.110 [2024-11-03 10:27:52.282550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.110 [2024-11-03 10:27:52.282602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.110 [2024-11-03 10:27:52.282611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:24.110 [2024-11-03 10:27:52.282619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.110 [2024-11-03 10:27:52.282626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.110 [2024-11-03 10:27:52.282658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.110 [2024-11-03 10:27:52.282667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:24.110 [2024-11-03 10:27:52.282675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.110 [2024-11-03 10:27:52.282684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.110 [2024-11-03 10:27:52.282704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.110 [2024-11-03 10:27:52.282712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:24.110 [2024-11-03 10:27:52.282720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.110 [2024-11-03 10:27:52.282730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.110 [2024-11-03 10:27:52.295091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.110 [2024-11-03 10:27:52.295126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:24.110 [2024-11-03 10:27:52.295141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.110 [2024-11-03 10:27:52.295149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.305881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.111 [2024-11-03 10:27:52.305918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:24.111 [2024-11-03 10:27:52.305929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.111 [2024-11-03 10:27:52.305937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.305993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.111 [2024-11-03 10:27:52.306008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:24.111 [2024-11-03 10:27:52.306017] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.111 [2024-11-03 10:27:52.306025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.306063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.111 [2024-11-03 10:27:52.306073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:24.111 [2024-11-03 10:27:52.306081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.111 [2024-11-03 10:27:52.306088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.306144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.111 [2024-11-03 10:27:52.306153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:24.111 [2024-11-03 10:27:52.306164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.111 [2024-11-03 10:27:52.306172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.306202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.111 [2024-11-03 10:27:52.306211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:24.111 [2024-11-03 10:27:52.306260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.111 [2024-11-03 10:27:52.306269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.306313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.111 [2024-11-03 10:27:52.306323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:24.111 [2024-11-03 10:27:52.306331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.111 [2024-11-03 10:27:52.306339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.306390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:24.111 [2024-11-03 10:27:52.306400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:24.111 [2024-11-03 10:27:52.306408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:24.111 [2024-11-03 10:27:52.306416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.111 [2024-11-03 10:27:52.306554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.666 ms, result 0 00:33:24.372 00:33:24.372 00:33:24.372 10:27:52 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:26.918 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:26.918 Process with pid 92803 is not found 00:33:26.918 Remove shared memory files 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # 
killprocess 92803 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92803 ']' 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92803 00:33:26.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92803) - No such process 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 92803 is not found' 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_band_md /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_l2p_l1 /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_l2p_l2 /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_l2p_l2_ctx /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_nvc_md /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_p2l_pool /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_sb /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_sb_shm /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_trim_bitmap /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_trim_log /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_trim_md /dev/hugepages/ftl_48b2e6c7-dfb6-4a73-bcbf-b5687b8e15f6_vmap 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:26.918 10:27:54 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:26.919 10:27:54 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:26.919 00:33:26.919 real 4m59.886s 00:33:26.919 user 4m46.662s 00:33:26.919 sys 0m12.883s 00:33:26.919 10:27:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:26.919 ************************************ 00:33:26.919 END TEST ftl_restore_fast 00:33:26.919 ************************************ 00:33:26.919 10:27:54 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:26.919 10:27:54 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:26.919 10:27:54 ftl -- ftl/ftl.sh@14 -- # killprocess 83853 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@950 -- # '[' -z 83853 ']' 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@954 -- # kill -0 83853 00:33:26.919 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83853) - No such process 00:33:26.919 Process with pid 83853 is not found 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 83853 is not found' 00:33:26.919 10:27:54 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:26.919 10:27:54 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95863 00:33:26.919 10:27:54 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95863 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@831 -- # '[' -z 95863 ']' 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:26.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
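Both teardown paths above (pids 92803 and 83853) exercise the same liveness probe: kill -0 fails because the process has already exited, so killprocess only logs that it is gone rather than signalling it. A minimal sketch of that pattern, simplified from what autotest_common.sh appears to do here (the real helper's option and error handling may differ):

    # Sketch of the liveness check exercised above; simplified, the real
    # killprocess helper in autotest_common.sh may handle more cases.
    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1              # no pid recorded for this test
      if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"   # the case hit here
        return 0
      fi
      kill "$pid" && wait "$pid"             # otherwise terminate and reap
    }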
00:33:26.919 10:27:54 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:26.919 10:27:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:26.919 [2024-11-03 10:27:54.996284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:33:26.919 [2024-11-03 10:27:54.996439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95863 ] 00:33:26.919 [2024-11-03 10:27:55.134736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.919 [2024-11-03 10:27:55.184104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.491 10:27:55 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:27.491 10:27:55 ftl -- common/autotest_common.sh@864 -- # return 0 00:33:27.491 10:27:55 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:27.753 nvme0n1 00:33:28.014 10:27:56 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:28.014 10:27:56 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:28.014 10:27:56 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:28.014 10:27:56 ftl -- ftl/common.sh@28 -- # stores=aaac26bd-af4f-487b-ab58-e84255148848 00:33:28.014 10:27:56 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:28.014 10:27:56 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aaac26bd-af4f-487b-ab58-e84255148848 00:33:28.275 10:27:56 ftl -- ftl/ftl.sh@23 -- # killprocess 95863 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@950 -- # '[' -z 95863 ']' 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@954 -- # kill -0 95863 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@955 -- # uname 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95863 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.275 killing process with pid 95863 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95863' 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@969 -- # kill 95863 00:33:28.275 10:27:56 ftl -- common/autotest_common.sh@974 -- # wait 95863 00:33:28.847 10:27:57 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:29.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:29.109 Waiting for block devices as requested 00:33:29.109 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:29.370 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:29.370 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:29.370 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:34.661 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:34.661 10:28:02 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:34.661 Remove shared memory files 00:33:34.661 10:28:02 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 
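The clear_lvols step above is a plain RPC round-trip: list the lvolstores, extract their UUIDs with jq, then delete each one. A standalone sketch of the same loop; the rpc.py path mirrors the log, and use of the default /var/tmp/spdk.sock socket is an assumption:

    # Standalone sketch of the clear_lvols step driven above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for uuid in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      "$rpc" bdev_lvol_delete_lvstore -u "$uuid"
    done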
00:33:34.661 10:28:02 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:34.661 10:28:02 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:34.661 10:28:02 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:34.661 10:28:02 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:34.661 10:28:02 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:34.661 00:33:34.661 real 18m7.557s 00:33:34.661 user 20m13.944s 00:33:34.661 sys 1m24.933s 00:33:34.661 10:28:02 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:34.661 ************************************ 00:33:34.661 END TEST ftl 00:33:34.661 ************************************ 00:33:34.661 10:28:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:34.661 10:28:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:34.661 10:28:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:34.661 10:28:02 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:34.661 10:28:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:34.661 10:28:02 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:34.661 10:28:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:34.661 10:28:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:34.661 10:28:02 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:34.661 10:28:02 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:34.661 10:28:02 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:34.661 10:28:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:34.661 10:28:02 -- common/autotest_common.sh@10 -- # set +x 00:33:34.661 10:28:02 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:34.661 10:28:02 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:34.661 10:28:02 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:34.661 10:28:02 -- common/autotest_common.sh@10 -- # set +x 00:33:36.097 INFO: APP EXITING 00:33:36.097 INFO: killing all VMs 00:33:36.097 INFO: killing vhost app 00:33:36.097 INFO: EXIT DONE 00:33:36.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:36.930 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:36.930 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:36.930 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:36.930 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:37.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:37.765 Cleaning 00:33:37.765 Removing: /var/run/dpdk/spdk0/config 00:33:37.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:37.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:37.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:37.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:37.765 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:37.765 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:37.765 Removing: /var/run/dpdk/spdk0 00:33:37.765 Removing: /var/run/dpdk/spdk_pid69302 00:33:37.765 Removing: /var/run/dpdk/spdk_pid69460 00:33:37.765 Removing: /var/run/dpdk/spdk_pid69662 00:33:37.765 Removing: /var/run/dpdk/spdk_pid69748 00:33:37.765 Removing: /var/run/dpdk/spdk_pid69772 00:33:37.765 Removing: /var/run/dpdk/spdk_pid69884 00:33:37.765 Removing: /var/run/dpdk/spdk_pid69896 00:33:37.765 Removing: /var/run/dpdk/spdk_pid70079 00:33:37.765 Removing: /var/run/dpdk/spdk_pid70152 00:33:37.765 Removing: /var/run/dpdk/spdk_pid70232 00:33:37.765 Removing: /var/run/dpdk/spdk_pid70326 
00:33:37.765 Removing: /var/run/dpdk/spdk_pid70407
00:33:37.765 Removing: /var/run/dpdk/spdk_pid70446
00:33:37.765 Removing: /var/run/dpdk/spdk_pid70483
00:33:37.765 Removing: /var/run/dpdk/spdk_pid70548
00:33:37.765 Removing: /var/run/dpdk/spdk_pid70654
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71079
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71121
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71173
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71183
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71240
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71252
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71310
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71326
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71368
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71386
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71428
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71446
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71573
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71615
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71693
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71854
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71927
00:33:37.765 Removing: /var/run/dpdk/spdk_pid71958
00:33:37.765 Removing: /var/run/dpdk/spdk_pid72380
00:33:37.765 Removing: /var/run/dpdk/spdk_pid72467
00:33:37.765 Removing: /var/run/dpdk/spdk_pid72573
00:33:37.765 Removing: /var/run/dpdk/spdk_pid72609
00:33:37.765 Removing: /var/run/dpdk/spdk_pid72635
00:33:37.765 Removing: /var/run/dpdk/spdk_pid72708
00:33:37.765 Removing: /var/run/dpdk/spdk_pid73326
00:33:37.765 Removing: /var/run/dpdk/spdk_pid73352
00:33:37.765 Removing: /var/run/dpdk/spdk_pid73802
00:33:37.765 Removing: /var/run/dpdk/spdk_pid73890
00:33:37.765 Removing: /var/run/dpdk/spdk_pid73992
00:33:37.765 Removing: /var/run/dpdk/spdk_pid74033
00:33:37.765 Removing: /var/run/dpdk/spdk_pid74054
00:33:37.765 Removing: /var/run/dpdk/spdk_pid74074
00:33:37.765 Removing: /var/run/dpdk/spdk_pid75910
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76025
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76029
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76041
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76091
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76095
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76107
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76146
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76150
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76162
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76201
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76205
00:33:37.765 Removing: /var/run/dpdk/spdk_pid76217
00:33:37.765 Removing: /var/run/dpdk/spdk_pid77580
00:33:37.765 Removing: /var/run/dpdk/spdk_pid77674
00:33:37.765 Removing: /var/run/dpdk/spdk_pid79068
00:33:37.765 Removing: /var/run/dpdk/spdk_pid80459
00:33:37.765 Removing: /var/run/dpdk/spdk_pid80522
00:33:37.765 Removing: /var/run/dpdk/spdk_pid80577
00:33:37.765 Removing: /var/run/dpdk/spdk_pid80631
00:33:37.765 Removing: /var/run/dpdk/spdk_pid80708
00:33:37.765 Removing: /var/run/dpdk/spdk_pid80772
00:33:37.765 Removing: /var/run/dpdk/spdk_pid80909
00:33:37.765 Removing: /var/run/dpdk/spdk_pid81256
00:33:37.765 Removing: /var/run/dpdk/spdk_pid81276
00:33:37.765 Removing: /var/run/dpdk/spdk_pid81725
00:33:37.765 Removing: /var/run/dpdk/spdk_pid81902
00:33:37.765 Removing: /var/run/dpdk/spdk_pid81992
00:33:37.765 Removing: /var/run/dpdk/spdk_pid82100
00:33:37.765 Removing: /var/run/dpdk/spdk_pid82142
00:33:37.765 Removing: /var/run/dpdk/spdk_pid82163
00:33:37.765 Removing: /var/run/dpdk/spdk_pid82453
00:33:37.765 Removing: /var/run/dpdk/spdk_pid82491
00:33:37.765 Removing: /var/run/dpdk/spdk_pid82551
00:33:37.765 Removing: /var/run/dpdk/spdk_pid82921
00:33:37.765 Removing: /var/run/dpdk/spdk_pid83058
00:33:37.765 Removing: /var/run/dpdk/spdk_pid83853
00:33:37.765 Removing: /var/run/dpdk/spdk_pid83974
00:33:37.765 Removing: /var/run/dpdk/spdk_pid84133
00:33:37.765 Removing: /var/run/dpdk/spdk_pid84236
00:33:37.765 Removing: /var/run/dpdk/spdk_pid84540
00:33:37.765 Removing: /var/run/dpdk/spdk_pid84821
00:33:37.765 Removing: /var/run/dpdk/spdk_pid85167
00:33:37.765 Removing: /var/run/dpdk/spdk_pid85333
00:33:37.765 Removing: /var/run/dpdk/spdk_pid85490
00:33:37.765 Removing: /var/run/dpdk/spdk_pid85532
00:33:37.765 Removing: /var/run/dpdk/spdk_pid85738
00:33:37.765 Removing: /var/run/dpdk/spdk_pid85752
00:33:37.765 Removing: /var/run/dpdk/spdk_pid85788
00:33:37.765 Removing: /var/run/dpdk/spdk_pid86063
00:33:37.765 Removing: /var/run/dpdk/spdk_pid86282
00:33:38.027 Removing: /var/run/dpdk/spdk_pid86915
00:33:38.027 Removing: /var/run/dpdk/spdk_pid87665
00:33:38.027 Removing: /var/run/dpdk/spdk_pid88205
00:33:38.027 Removing: /var/run/dpdk/spdk_pid89051
00:33:38.027 Removing: /var/run/dpdk/spdk_pid89193
00:33:38.027 Removing: /var/run/dpdk/spdk_pid89263
00:33:38.027 Removing: /var/run/dpdk/spdk_pid89842
00:33:38.027 Removing: /var/run/dpdk/spdk_pid89896
00:33:38.027 Removing: /var/run/dpdk/spdk_pid90542
00:33:38.027 Removing: /var/run/dpdk/spdk_pid91029
00:33:38.027 Removing: /var/run/dpdk/spdk_pid91869
00:33:38.027 Removing: /var/run/dpdk/spdk_pid91992
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92023
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92076
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92126
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92173
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92360
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92429
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92490
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92546
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92575
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92635
00:33:38.027 Removing: /var/run/dpdk/spdk_pid92803
00:33:38.027 Removing: /var/run/dpdk/spdk_pid93006
00:33:38.027 Removing: /var/run/dpdk/spdk_pid93720
00:33:38.027 Removing: /var/run/dpdk/spdk_pid94427
00:33:38.027 Removing: /var/run/dpdk/spdk_pid95157
00:33:38.027 Removing: /var/run/dpdk/spdk_pid95863
00:33:38.027 Clean
00:33:38.027 10:28:06 -- common/autotest_common.sh@1451 -- # return 0
00:33:38.027 10:28:06 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:33:38.027 10:28:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:38.027 10:28:06 -- common/autotest_common.sh@10 -- # set +x
00:33:38.027 10:28:06 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:33:38.027 10:28:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:38.027 10:28:06 -- common/autotest_common.sh@10 -- # set +x
00:33:38.027 10:28:06 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:38.027 10:28:06 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:38.027 10:28:06 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:38.027 10:28:06 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:33:38.027 10:28:06 -- spdk/autotest.sh@394 -- # hostname
00:33:38.028 10:28:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:38.289 geninfo: WARNING: invalid characters removed from testname!
00:34:04.872 10:28:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:06.783 10:28:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:08.693 10:28:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:10.606 10:28:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:13.153 10:28:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:14.537 10:28:42 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:17.125 10:28:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:17.125 10:28:45 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:34:17.126 10:28:45 -- common/autotest_common.sh@1681 -- $ lcov --version
00:34:17.126 10:28:45 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:34:17.126 10:28:45 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:34:17.126 10:28:45 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:34:17.126 10:28:45 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:34:17.126 10:28:45 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:34:17.126 10:28:45 -- scripts/common.sh@336 -- $ IFS=.-:
00:34:17.126 10:28:45 -- scripts/common.sh@336 -- $ read -ra ver1
00:34:17.126 10:28:45 -- scripts/common.sh@337 -- $ IFS=.-:
00:34:17.126 10:28:45 -- scripts/common.sh@337 -- $ read -ra ver2
00:34:17.126 10:28:45 -- scripts/common.sh@338 -- $ local 'op=<'
00:34:17.126 10:28:45 -- scripts/common.sh@340 -- $ ver1_l=2
00:34:17.126 10:28:45 -- scripts/common.sh@341 -- $ ver2_l=1
00:34:17.126 10:28:45 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:34:17.126 10:28:45 -- scripts/common.sh@344 -- $ case "$op" in
00:34:17.126 10:28:45 -- scripts/common.sh@345 -- $ : 1
00:34:17.126 10:28:45 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:34:17.126 10:28:45 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:17.126 10:28:45 -- scripts/common.sh@365 -- $ decimal 1
00:34:17.126 10:28:45 -- scripts/common.sh@353 -- $ local d=1
00:34:17.126 10:28:45 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:34:17.126 10:28:45 -- scripts/common.sh@355 -- $ echo 1
00:34:17.126 10:28:45 -- scripts/common.sh@365 -- $ ver1[v]=1
00:34:17.126 10:28:45 -- scripts/common.sh@366 -- $ decimal 2
00:34:17.126 10:28:45 -- scripts/common.sh@353 -- $ local d=2
00:34:17.126 10:28:45 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:34:17.126 10:28:45 -- scripts/common.sh@355 -- $ echo 2
00:34:17.126 10:28:45 -- scripts/common.sh@366 -- $ ver2[v]=2
00:34:17.126 10:28:45 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:34:17.126 10:28:45 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:34:17.126 10:28:45 -- scripts/common.sh@368 -- $ return 0
00:34:17.126 10:28:45 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:17.126 10:28:45 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:34:17.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:17.126 --rc genhtml_branch_coverage=1
00:34:17.126 --rc genhtml_function_coverage=1
00:34:17.126 --rc genhtml_legend=1
00:34:17.126 --rc geninfo_all_blocks=1
00:34:17.126 --rc geninfo_unexecuted_blocks=1
00:34:17.126
00:34:17.126 '
00:34:17.126 10:28:45 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:34:17.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:17.126 --rc genhtml_branch_coverage=1
00:34:17.126 --rc genhtml_function_coverage=1
00:34:17.126 --rc genhtml_legend=1
00:34:17.126 --rc geninfo_all_blocks=1
00:34:17.126 --rc geninfo_unexecuted_blocks=1
00:34:17.126
00:34:17.126 '
00:34:17.126 10:28:45 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:34:17.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:17.126 --rc genhtml_branch_coverage=1
00:34:17.126 --rc genhtml_function_coverage=1
00:34:17.126 --rc genhtml_legend=1
00:34:17.126 --rc geninfo_all_blocks=1
00:34:17.126 --rc geninfo_unexecuted_blocks=1
00:34:17.126
00:34:17.126 '
00:34:17.126 10:28:45 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:34:17.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:17.126 --rc genhtml_branch_coverage=1
00:34:17.126 --rc genhtml_function_coverage=1
00:34:17.126 --rc genhtml_legend=1
00:34:17.126 --rc geninfo_all_blocks=1
00:34:17.126 --rc geninfo_unexecuted_blocks=1
00:34:17.126
00:34:17.126 '
00:34:17.126 10:28:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:17.126 10:28:45 -- scripts/common.sh@15 -- $ shopt -s extglob
00:34:17.126 10:28:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:17.126 10:28:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:17.126 10:28:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:17.126 10:28:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:17.126 10:28:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:17.126 10:28:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:17.126 10:28:45 -- paths/export.sh@5 -- $ export PATH
00:34:17.126 10:28:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:17.126 10:28:45 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:34:17.126 10:28:45 -- common/autobuild_common.sh@479 -- $ date +%s
00:34:17.126 10:28:45 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1730629725.XXXXXX
00:34:17.126 10:28:45 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1730629725.n7scWB
00:34:17.126 10:28:45 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:34:17.126 10:28:45 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:34:17.126 10:28:45 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:34:17.126 10:28:45 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:34:17.126 10:28:45 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:34:17.126 10:28:45 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:34:17.126 10:28:45 -- common/autobuild_common.sh@495 -- $ get_config_params
00:34:17.126 10:28:45 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:34:17.126 10:28:45 -- common/autotest_common.sh@10 -- $ set +x
00:34:17.126 10:28:45 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:34:17.126 10:28:45 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:34:17.126 10:28:45 -- pm/common@17 -- $ local monitor
00:34:17.126 10:28:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:17.126 10:28:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:17.126 10:28:45 -- pm/common@25 -- $ sleep 1
00:34:17.126 10:28:45 -- pm/common@21 -- $ date +%s
00:34:17.126 10:28:45 -- pm/common@21 -- $ date +%s
00:34:17.126 10:28:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730629725
00:34:17.126 10:28:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730629725
00:34:17.126 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730629725_collect-cpu-load.pm.log
00:34:17.126 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730629725_collect-vmstat.pm.log
00:34:18.068 10:28:46 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:34:18.068 10:28:46 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:34:18.068 10:28:46 -- spdk/autopackage.sh@14 -- $ timing_finish
00:34:18.068 10:28:46 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:18.068 10:28:46 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:18.068 10:28:46 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:18.068 10:28:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:18.068 10:28:46 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:18.068 10:28:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:18.069 10:28:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:18.069 10:28:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:34:18.069 10:28:46 -- pm/common@44 -- $ pid=97549
00:34:18.069 10:28:46 -- pm/common@50 -- $ kill -TERM 97549
00:34:18.069 10:28:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:18.069 10:28:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:34:18.069 10:28:46 -- pm/common@44 -- $ pid=97551
00:34:18.069 10:28:46 -- pm/common@50 -- $ kill -TERM 97551
00:34:18.080 + [[ -n 5765 ]]
00:34:18.080 + sudo kill 5765
00:34:18.080 [Pipeline] }
00:34:18.096 [Pipeline] // timeout
00:34:18.101 [Pipeline] }
00:34:18.115 [Pipeline] // stage
00:34:18.121 [Pipeline] }
00:34:18.135 [Pipeline] // catchError
00:34:18.145 [Pipeline] stage
00:34:18.147 [Pipeline] { (Stop VM)
00:34:18.160 [Pipeline] sh
00:34:18.446 + vagrant halt
00:34:20.985 ==> default: Halting domain...
00:34:26.288 [Pipeline] sh
00:34:26.571 + vagrant destroy -f
00:34:29.112 ==> default: Removing domain...
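
Before the VM teardown above, stop_monitor_resources (traced at pm/common@29-50) shut down the two power monitors using a plain pidfile protocol: each collector had written its PID under the power/ output directory, and teardown signals whatever is still recorded there, driven by the EXIT trap installed at autobuild_common.sh@498. A condensed sketch of that protocol (monitor names and pidfile paths are taken from this run; the loop body is reconstructed, not the pm/common source):

    #!/usr/bin/env bash
    # Sketch of the pidfile-based monitor shutdown traced at pm/common@40-50.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    stop_monitor_resources() {
        local signal=TERM monitor pid
        for monitor in collect-cpu-load collect-vmstat; do
            # A monitor that never started leaves no pidfile behind; skip it.
            [[ -e "$power_dir/$monitor.pid" ]] || continue
            pid=$(<"$power_dir/$monitor.pid")
            kill "-$signal" "$pid"
        done
    }

    # Installed once after the monitors start, so cleanup runs however the
    # script exits:
    trap stop_monitor_resources EXIT
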
00:34:29.696 [Pipeline] sh
00:34:29.980 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:29.990 [Pipeline] }
00:34:30.005 [Pipeline] // stage
00:34:30.010 [Pipeline] }
00:34:30.024 [Pipeline] // dir
00:34:30.030 [Pipeline] }
00:34:30.044 [Pipeline] // wrap
00:34:30.051 [Pipeline] }
00:34:30.065 [Pipeline] // catchError
00:34:30.074 [Pipeline] stage
00:34:30.077 [Pipeline] { (Epilogue)
00:34:30.090 [Pipeline] sh
00:34:30.375 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:35.661 [Pipeline] catchError
00:34:35.663 [Pipeline] {
00:34:35.676 [Pipeline] sh
00:34:35.960 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:35.960 Artifacts sizes are good
00:34:35.971 [Pipeline] }
00:34:35.985 [Pipeline] // catchError
00:34:35.997 [Pipeline] archiveArtifacts
00:34:36.005 Archiving artifacts
00:34:36.104 [Pipeline] cleanWs
00:34:36.116 [WS-CLEANUP] Deleting project workspace...
00:34:36.116 [WS-CLEANUP] Deferred wipeout is used...
00:34:36.122 [WS-CLEANUP] done
00:34:36.124 [Pipeline] }
00:34:36.139 [Pipeline] // stage
00:34:36.144 [Pipeline] }
00:34:36.158 [Pipeline] // node
00:34:36.164 [Pipeline] End of Pipeline
00:34:36.204 Finished: SUCCESS
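
For reference, the coverage post-processing traced at spdk/autotest.sh@394-404 earlier in this log reduces to a capture/merge/filter pipeline. A condensed sketch, under these assumptions: paths, filter globs, and the $LCOV_OPTS flag set come from this run; the '/usr/*' pass additionally carried --ignore-errors unused,unused but is folded into one loop here for brevity; and cov_base.info is the pre-test baseline captured earlier in the job:

    #!/usr/bin/env bash
    # Sketch of the lcov flow traced at spdk/autotest.sh@394-404.
    out=/home/vagrant/spdk_repo/spdk/../output
    src=/home/vagrant/spdk_repo/spdk

    # 1. Capture post-test counters; the test name is just the hostname.
    #    $LCOV_OPTS is left unquoted on purpose so it splits into flags.
    lcov $LCOV_OPTS -q -c --no-external -d "$src" -t "$(hostname)" -o "$out/cov_test.info"

    # 2. Merge the pre-test baseline with the post-test capture.
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # 3. Strip sources that should not count toward SPDK coverage.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

    # 4. Drop the intermediates once cov_total.info is final.
    rm -f "$out/cov_base.info" "$out/cov_test.info"
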